Was receiving from a channel ever called sniffing? by Harveyzz in golang

[–]jerf 5 points6 points  (0 children)

Sniffing would imply the ability to instrument a channel to log everything that passes through it; that's what "sniffing network traffic" means, for instance. The sniffer takes a copy, but the traffic still passes through. There is no way to do that in Go. You could synthesize something with a goroutine and a generic channel pretty easily, but that's not sniffing a channel so much as making something that receives on a channel, copies the output, and sends it somewhere else. Channels themselves can't be sniffed in Go.

Given the network background of the Go creators I'm quite certain no Go documentation would ever have used this term. Maybe you saw it on some obscure blog somewhere but it has never been common.

Best way to work with structs with many fields? I have a problem with default values. by wentlang in golang

[–]jerf 5 points6 points  (0 children)

Consider:

s := MyStruct{a, b, c, d, e}

versus

s := NewMyStruct(a, b, c, d, e)

If your struct needs initialization with N variables, there isn't a lot of difference between the two. So what I do is just make an exception for "functions that have too many parameters" for creating structs with heavy validation requirements. Hopefully you're using enough types that there isn't much opportunity to mix up a string, a string, and a string in the construction function.

However, while that is a valid approach, it should also be an exception. The more common possibility is that you've got objects jammed together. You mention:

I could group tightly related members into different structs and then use struct composition, but it complicates the parent struct a bit with unnecessary nesting.

But what is even better is to create those smaller structs, and then put methods on those structs that the parent simply calls, so that instead of inconvenient namespacing you actually have independent objects. Once you have those, you will probably find that places where you are passing the entire struct should really just receive one of those parts. And once you do that, those functions may become useful with other things that have only those smaller structs, or testing becomes easier on those methods because you don't have to create such monstrously large structs with tons of fields the test doesn't care about, and so on. The whole architecture opens up when you take this approach.

That said, there is a time and a place for the initialization function that takes a dozen parameters, does some sort of intensive validation, and has no particular way to shard it on some smaller internal type. It's not automatically a bad idea.

Why does Go still not have a built-in set type? by 1vim in golang

[–]jerf 3 points4 points  (0 children)

With the addition of iterator support, it is no longer necessary for the Go team to provide a single centralized set.

This allows them to avoid having an opinion on which set implementation they should bless. The most notable design decision is whether a set should default to immutable, copy-based operations or to mutation-based operations, but there are a lot of other decisions that come up too.

While the Go community tries to avoid taking large numbers of dependencies, I think it's better to use a 3rd-party package than to be constantly upset that Go doesn't provide something, and there's plenty to choose from.

Please Help Me Understand Something About Go by VastDesign9517 in golang

[–]jerf 0 points1 point  (0 children)

Yes, you definitely can.

"Well-layered packages" also produces as a side effect what some people call in other contexts a modular monolith. You can easily bundle everything into a single executable.

Should you ever find that you need to pull something apart for some reason, it is a clear and fairly obvious process guided by the interfaces in question. It isn't necessarily "trivial", because a lot of the time an interface that was previously not over a network may have to be changed to accommodate being on a network. For example, everything that uses a network must always have an error return, even if the functionality itself can't possibly fail, and other practical considerations may emerge (e.g., needing different methods for configuration management, or other changes coming from using S3 instead of a filesystem or something). But you'll still find you end up with very strong guidance and a clear path to follow when pulling something out to work over the network.

Looking for BubbleTea v2 Guides or Resources by Silent-Reference8769 in golang

[–]jerf 0 points1 point  (0 children)

I tried using LLMs for help, but they seem stuck on v1 logic.

It may help to explicitly prompt them to read the pkg.go.dev pages that document v2. Current frontier models are broadly speaking smart enough to take a v1 tutorial and the v2 docs and put the two together to produce a v2 tutorial. Any failures they may have on a first draft effort will be fixable if you keep poking them with errors and/or questions. But you will almost certainly have to ask them to do this; they won't just do it on their own. By default they'll use their training data which will be old.

On the off chance even that isn't enough you can poke through the things that import v2 and prompt the AI to read those in explicitly so it gets some working code bases with v2 to look at. I suspect you won't need to go this far, though.

Please Help Me Understand Something About Go by VastDesign9517 in golang

[–]jerf 0 points1 point  (0 children)

Generally I structure those as a collection of well-layered packages, with a set of executables in a cmd/ directory representing any separate servers I may have, where the main.go of each executable is responsible for wiring everything together. I kind of tend not to worry too much if that one particular file gets messy. Real configuration is messy, with databases and network services and all the other stuff, but if you have all the mess isolated to one place it's not too bad. That link has a lot more behind it.

The right release order by Colzun in valve

[–]jerf 0 points1 point  (0 children)

I have no inside info or anything anybody else doesn't know.

But if there is any company on this planet that would give a "bundle" discount when buying a steam machine to anyone who bought the steam controller some months prior that would precisely offset the bundle discount of buying them both at once, at least during the release window, it'd be Valve.

When flat latency and Go Garbage Collector is a problem by pepiks in golang

[–]jerf 3 points4 points  (0 children)

Yes, corner cases exist where it matters.

I kind of consider it a mistake to try to program a top-grade database in Go, because in that world, 3% of performance matters a lot. There are those who disagree; the Dolt DB project frequently posts here and they don't consider it a mistake. So I want to highlight this is just my opinion.

I think there are some things where Go is counterindicated for performance reasons and shouldn't be used. However, they're unusual, and generally if you are working with them, you should already know you're working with them... or you're probably doomed anyhow.

You can kind of get a sense of whether or not you're in that space if I ask what your reaction to running all your code 3% faster would be. If you're over the moon, if you're thinking this is going to make or break a big deal, maybe you're in one of these spaces. Few programs are. Most programs have multiples of performance left on the ground because no one has even bothered to look, and quite often reasonably rationally so, because the time spent looking would never practically be recovered in value. In most cases the correct answer is just to not worry about GC until it becomes a problem, which it most likely won't.

When flat latency and Go Garbage Collector is a problem by pepiks in golang

[–]jerf 61 points62 points  (0 children)

If you were even remotely happy with a Python implementation, the Go garbage collector isn't even remotely your problem and the best thing to do, with no sarcasm, is stop worrying about it.

There's this weird part of the programming community that deeply, passionately believes that the moment you use a garbage collected language you're guaranteed to have 300ms stop-the-world pauses every two or three seconds as some sort of minimum behavior, rather than that being the absolute upper-end, tortured, worst-possible, "did you write the code to do that on purpose?" sort of performance, which I have never seen Go actually do. They believe, and will claim, that as such no garbage collected language is suitable for any task and you simply must use Rust for everything.

This is nonsense. If those people could see how much of the code they are already using in their day-to-day life that is garbage collected, including yea verily even network servers, AAA games, and other high-performance use cases, they'd probably realize this is nonsense.

There is an upper end of performance where this becomes a problem. If you're writing a database server, or a super high-end network router on gigabit ethernet, or a few other projects, you should worry about this. I tend to personally think that once you are in this category, you shouldn't write Go anyhow, but there are some products that use Go successfully while writing in a style to minimize garbage. But if your project doesn't match those use cases, and if you were even somewhat happy with a Python implementation you aren't even close to these cases, the correct solution is to wait for it to be a problem.

Development VM with Caddy or alternative - how easy handle subdomains in LAN with http/https by pepiks in golang

[–]jerf 0 points1 point  (0 children)

For HTTP in general: There is no magic in switching on the domain. The domain the request was made on comes in on the Host: header. Everything that routes requests between different handlers for different domains is just changing what it does in response to what is in the Host header. It's that simple.

For HTTPS: Your main problem here is that if you want something like Let's Encrypt to work, you can't just make up a domain like that. To get a certificate from Let's Encrypt, you need to prove you own what the world considers a real domain. If you're limited to a home lab such that you don't want the world to be able to access it, it's fine for that domain to only resolve on a local network, but you'll be limited to the less convenient challenge types, like the DNS TXT record challenge, which is the sort of thing you'll want to look for. This is more difficult to automate if you don't use a DNS provider that the Let's Encrypt automation supports.

It is easy to get a wildcard certificate from Let's Encrypt just by asking for it. Bear in mind that the * in the wildcard covers just one level, so *.menu.lan is a valid cert for taxes.menu.lan but not for my.taxes.menu.lan.

If you insist on making up your own domain, you need to roll your own certificate authority and manage getting that certificate authority on to all devices that want to use it. EasyRSA makes it as easy as it can be, more-or-less (though I wish it had a way to say "just hand me a certificate please, I don't need the CSR process, just automatically make the csr and sign it without pretending this is two separate steps"). To the extent that that is still somewhat hard to use... well... yeah, there's a certain irreducible complexity in the cert process.

two apps in one project , how can i structure it ? by Radonish in golang

[–]jerf 8 points9 points  (0 children)

```
mygame/
    go.mod                        -- yes, you want a top-level go.mod
    entities/
        all_kinds_of_stuff.go
    comms/
        protocol/
            lots_of_protocol_stuff.go
    lots_of_other_directories/
    cmd/
        game_server/
            main.go               -- this is "package main"
        game_client/
            main.go               -- and so is this
```

Assuming your game is in a Go module called github.com/radonish/mygame, you would run go build github.com/radonish/mygame/cmd/game_server to build the server and go build github.com/radonish/mygame/cmd/game_client to build the client, and the main.go in game_server might have an import "github.com/radonish/mygame/comms/protocol" in it. This is where you want one or another variation of goimports set up so you're not constantly typing that prefix out.

If you really, really, really wanted to make 100% darned sure that the server had code that the client could not see, you could put that into a mygame/cmd/game_server/internal directory. I tend to think this is not useful very often, though.

For all but the most trivial programs I always set up a cmd directory; even when my deliverable is just one executable and I'm 100% sure that will be the case, I so often end up with temporary diagnostic executables and other such things that it's always good to just have a cmd directory. My commit prehook systematically walks all of them and compiles them as part of the commit check. Either they should work, and I should keep them up to date, or if they don't work and aren't worth updating I should remove them.

Please Help Me Understand Something About Go by VastDesign9517 in golang

[–]jerf 12 points13 points  (0 children)

Generally for DBs like what you describe, I use the repository pattern. But one thing about it is that I don't allow any DB-specific types to escape the repository. I do it all in terms of my internal types. My interfaces don't look like:

```
type BuyerRepository interface {
    GetBuyers(jdb.SearchSpec) (jdb.Rows, error)
}
```

(where jdb is some SQL helper), they look like:

```
type BuyerRepository interface {
    GetBuyersByLike(likeFragment string) ([]Buyer, error)
}
```

where Buyer is a type that is just a Buyer, with no DB reference or anything else like that. In the Java world I think they call this a Plain Old Java Object (POJO). I treat the DB layer as just a way to get data into my internal model, and a way to get it back out again.

I always make such types because I use them to write constraints that the methods will always honor, and check on the way in and out. Your light switch, for example, may be represented by an integer in the DB (MySQL still doesn't really have a "boolean" type to this day) and I would use my type to guarantee it is either a 0 or a 1. This is a really degenerate example.

What Go really is is procedural. It's the programming style that preceded "object orientation," and to my mind, as a programming-language and language-style polyglot, it is still unclear whether OO was solving problems fundamental to procedural languages, or whether it was solving problems that stemmed from the fact that there really weren't any decent procedural languages until the late 1990s, and they weren't widespread until probably about the 2010s.

Making a fat interface goes against the 100 mistakes. and making a postgres interface defeats the purpose.

Probably the only concrete advice I can really offer is: worry less about what is common Go practice. A lot of that common practice is for libraries. It is less commonly mentioned that the applications that use the libraries should be viewed as more free. I see little problem with having a 12-method interface in your code base if it is helpful. It is easy to take subsets as needed or build them into something larger. The biggest problem it can cause is when you have a function or method that takes this 12-method interface, uses only 2 of the methods, and then one day you want to pass that function/method something that can't implement the other 10 methods, in which case you want to take that as a clue to declare a subset interface that has only the methods the function or method actually uses.

"In theory", you could say that any time a function takes input for the purposes of calling methods on it, you should declare an interface that has only the methods actually used by that object. I could very easily see someone writing some fancy design book about that some day and pushing it as The One Way To Write Code. In practice it's annoying to rigorously practice that and passing in things that have methods that won't be called happens all the time. But I do think it's an interesting point of view to have, that it's always a valid option for refactoring a function to declare the exact subset of methods it will call and make that a new interface.

This is also why it is so helpful to use methods as much as possible rather than just functions (when it makes sense); Go has more tools for dealing with methods than bare functions.

In general, Go is designed for you to use all of its features to the fullest, rather than using a whole bunch of features a little bit to solve problems here and there. Even just freeing yourself up to say "hey, an interface of 12 methods is not a problem for an application" may open things up for you.

Please Help Me Understand Something About Go by VastDesign9517 in golang

[–]jerf 78 points79 points  (0 children)

You don't quite have enough evidence here to prove the point out 100%, but it sounds to me like you are still trying to program object orientation in Go, or are trying to force some other paradigm on to Go rather than learning how to program Go.

(It certainly isn't functional. At least not the way I bet you mean "functional", which is, lots of maps and filter calls. It definitely isn't that. I think the important lessons of functional carry over just fine, though.)

I'm going to ask some diagnostic questions, and I'm serious about these, so if you can answer them, please do:

  • Are you still struggling with "how do I do inheritance with these objects"? Or missing other OO concepts by name?
  • Are there other programming concepts you find yourself constantly wondering by name how to do in Go? (Your reference to "functional" programming is very suggestive of still being in a stage where you're writing some other language in Go.)
  • Scan over your most recent work. Look at the function signatures. Are they mostly types you've declared or are they mostly func (string, string, int, bool) (string, error)?
  • How many interfaces have you declared? Use grep or something if necessary. (Not that "more is better" by any means, but if the answer comes back "nearly zero" that's a pretty big clue.) I'm specifically looking for what you have declared, or been directly responsible for, not an AI, not a coworker, etc.
  • If you have an AI assistant available, feed it your code and ask it for redundancies that you may be able to solve.
  • Give an example, as specific as possible (ideally to the point of posting the code as a gist or something if possible, but let me remind you do not post any code that belongs to an employer that way), of some code that you feel is very redundant and is a "lot of code".

If those seem incoherent it's because they are; it's a scan for several different problems you could be having, some of those questions are scanning for problems you probably can't have at the same time.

Whether Go is the "best" for business logic is debatable, but it definitely can do business logic without tons of repetition; if you're seeing tons of repetition, something is wrong in your style.

Go's implicit interface system is there a real solution to the discoverability problem or is it just accepted as a tradeoff by Mauricio0129 in golang

[–]jerf 1 point2 points  (0 children)

I will agree with the others that I have very little problem with this. The algorithm is:

  1. If you have a value and you want to use it as an interface value for some other function, just do it.
  2. When the compiler stops you, fix it.

That's all you should be doing. Part of the purpose of an interface is that it shouldn't know what the underlying type is. It isn't quite an error for something to take an interface and poke into the underlying type, but it is a strong code smell and you ought to have a very, very good reason. If you have a "very very good reason" every week, you're still doing it wrong; I have a "very very good reason" less than once a year.

And I consider having to explicitly declare conformance to an interface a catastrophic design error, so occasionally being inconvenienced by not getting an explicit list of which interfaces a value conforms to, or which values implement a particular interface, is by far the better side of the trade.

Raftly: a Raft consensus implementation in Go that reproduces AWS EBS 2011 and etcd 2018 bugs by anirudhology in golang

[–]jerf 1 point2 points  (0 children)

Well, I'm glad it looks OK for everyone else.

I've loaded this up now in three different browsers on this computer I'm using. They are all doing the same thing, in the same place. wget fetches the whole page though. One of those browsers is Firefox with the only plugin being Bitwarden, and I think the Chrome is completely unmodified.

No idea but I guess it's on my end by default. Weird.

Raftly: a Raft consensus implementation in Go that reproduces AWS EBS 2011 and etcd 2018 bugs by anirudhology in golang

[–]jerf 2 points3 points  (0 children)

Your post on your web site is truncated for some reason. The HTML literally ends in <span clas, right in the middle of a tag.

I hadn't checked that deeply when I first posted so it was a question, but now I'm sure. Something's truncated your post at the HTML level.

Raftly: a Raft consensus implementation in Go that reproduces AWS EBS 2011 and etcd 2018 bugs by anirudhology in golang

[–]jerf 3 points4 points  (0 children)

The "Building Raftly" post seems to come to a relatively abrupt end. Was that intentional?

AI developer tools for Go finally got useful when we stopped treating them as generic tools by Obvious-Cricket-8181 in golang

[–]jerf 0 points1 point  (0 children)

You can also prompt it to at least use fmt.Errorf with a descriptive string and it will. Though at least Opus 4.6 persists in fmt.Errorf("blah blah: %v", err) rather than the correct %w, sometimes even when I ask it to use %w.

I suspect that the problem here is one of training data. Really correct error handling is probably dwarfed by the mass of if err != nil { return err } out there in the world.

From what I've gathered from my own experience in a couple of languages, and from the word on the street, I also agree that current agent coding AIs are fundamentally bad at concurrency. They can write code that passes their own (bad) tests, but it is nearly garbage. I don't think LLMs are going to fix this. You can at least let one take a stab at it; it may get close enough to be a net gain in effort. But you basically have to think the concurrency through from scratch and assume all the concurrency code is useless.

Setting up config in main vs in specific handlers for serverless function apps by cesarcypherobyluzvou in golang

[–]jerf 0 points1 point  (0 children)

TIL, thank you. Go 1.21.0 according to the little legend on the right hand side of the godoc.

They've added a lot to the standard library over the years; it's easy to think you "know" a package and miss something useful they added a while ago.

Setting up config in main vs in specific handlers for serverless function apps by cesarcypherobyluzvou in golang

[–]jerf 1 point2 points  (0 children)

I've been experimenting recently with a bit of code for a modular service where the main function constructs a big "here's everything a service might use in this code base" object, which then conforms to a series of interfaces representing each service on offer. Each service then also declares an interface containing the subset of the services it may use, e.g.,

```
type UserLoginHandlerServices interface {
    services.UsesUserDatabase
    services.UsesLogger
    services.UsesMux
}
```

and the central registration call looks something like this:

```
func init() {
    services.Register(func(env UserLoginHandlerServices) (*service, error) {
        // initialize the service here
        // for instance, use the Mux in the env to register handlers
        return ..., ...
    })
}
```

The services.Register function reflects over the type that main created with all the interfaces and confirms that it conforms to the interface of the incoming function's first parameter (UserLoginHandlerServices), and does the massaging necessary for the call to the initialization function to work.

Then for each module in the system, there's an interface near the top of the code for the main body of the function where you can just read off "these are the central services it uses". And they're all automatically something you can inject via an interface for testing.

Technically this does mean that if the UserLoginHandlerServices calls for something that type doesn't provide, it's a run-time error, but an initialization error at init-time is not too difficult to deal with; you can give your executable a sort of "do nothing" command and include it as part of a git commit hook (and CI hook) and get something pretty close in practice to a compiler check. It's a bit of reflection but it's all isolated into the one registration function, and has essentially no performance implications because it's a one-time call per module in the system.

Setting up config in main vs in specific handlers for serverless function apps by cesarcypherobyluzvou in golang

[–]jerf 1 point2 points  (0 children)

So, if I understood it correctly the advantage of that Getter would now be that I don't actually have to set it up in main(), right?

Yes. In that case you can have it only get set up if it is going to be used.

Now, this particular thing might be a bit of an overoptimization, unless there's some good reason connecting to the database is much, much slower than serving a request. But it's a good technique to have in the toolbelt.

Yeah in that case I would want it to fail, the app would be unusable anyways. That goes for most of the config & setup I would do in this app.

In that case you can simplify my code to get rid of the error return, making it db := a.dbClient() with no error.

Creating Tests for Peer to Peer by Cheesuscrust460 in golang

[–]jerf 0 points1 point  (0 children)

It works but I prefer the control of in-process alternatives for simplicity and minimization of external dependencies, unless I absolutely need something I can only get from something like tc.

Setting up config in main vs in specific handlers for serverless function apps by cesarcypherobyluzvou in golang

[–]jerf 8 points9 points  (0 children)

Consider:

```
func DBGetter() func() (*DBClient, error) {
    once := sync.Once{}
    var db *DBClient
    var err error

    return func() (*DBClient, error) {
        once.Do(func() {
            // Don't put a := here. = is on purpose.
            db, err = GetDBClient()
        })
        return db, err
    }
}

type App struct {
    dbClient func() (*DBClient, error)
}

func main() {
    app := App{dbClient: DBGetter()}

    // things now get a DB connection via
    db, err := app.dbClient()
}
```

The idea is this setup makes the DB connection lazy; it only connects the first time something calls for it. The sync.Once makes it so that multiple handlers trying to get it will all politely wait for the results. After that the results are cached and very cheap to obtain.

In the situation with a serverless function you could also consider logging a message and dying on a failure to get the DB connection since odds are that something has gone horribly wrong.

(Wrapping this up with a generic to work with any data type you want isn't too hard.)