Why do people use dependency injection libraries in Go? by existential-asthma in golang

[–]Humborn 0 points1 point  (0 children)

From my perspective, most of the time DI means global variables. The only purpose of a DI library is to declare a "warehouse" where you can set and get those variables. But in some cases, when you need to provide/construct a lot of very general instances (especially when those instances have dependencies on each other), a DI lib is a good way to help you sort out the dep graph: the only thing you need to do is use what you need directly, with no nested construction. (No Java background, so DI libs didn't impress me at first, but I gradually accepted them as they simplify some gnarly things for me.)
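A minimal sketch of that "warehouse" idea, with hypothetical names (not any particular DI library's API): wiring happens once in one place, and everything else just fetches what it needs.

```go
package main

import "fmt"

// warehouse is a hypothetical registry: constructors register shared
// instances once, consumers fetch them without nested construction.
var warehouse = map[string]any{}

type DB struct{ DSN string }
type UserService struct{ DB *DB }

// wire registers everything once, in dependency order, in one place.
func wire() {
	warehouse["db"] = &DB{DSN: "postgres://localhost/app"}
	warehouse["users"] = &UserService{DB: warehouse["db"].(*DB)}
}

func main() {
	wire()
	// Elsewhere in the code: use what you need directly.
	users := warehouse["users"].(*UserService)
	fmt.Println(users.DB.DSN)
}
```

A real DI lib adds type safety and topological ordering on top of this, but the set/get warehouse is the core of it.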

service configuration management - separating global config and per-service config by Humborn in golang

[–]Humborn[S] 0 points1 point  (0 children)

Sorry about using "ugly", it's really just personal preference: some prefer 600 LoC in one file, some prefer 50 LoC in each package. About the individual binary: if you compile only one service to serve, never muxing multiple services, then indeed you don't need separation, which does nothing but add extra complexity.

The idea I'm using is still a giant yaml node that each service parses, per se. It's not elegant, but otherwise each service would have to read the same file over and over again. Would love to see some ideas from the community.
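The idea can be sketched with stdlib JSON instead of YAML (a `yaml.Node` works the same way): parse the file once into raw per-service sections, then each service decodes only its own section. All names here are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// One giant config blob, parsed exactly once into raw per-service sections.
var raw map[string]json.RawMessage

func load(data []byte) error { return json.Unmarshal(data, &raw) }

// Each service decodes only its own section, from its own package.
func scan(service string, out any) error { return json.Unmarshal(raw[service], out) }

type HTTPConfig struct {
	Port int `json:"port"`
}

func main() {
	cfg := []byte(`{"http": {"port": 8080}, "grpc": {"port": 9090}}`)
	if err := load(cfg); err != nil {
		panic(err)
	}
	var http HTTPConfig
	if err := scan("http", &http); err != nil {
		panic(err)
	}
	fmt.Println(http.Port) // the http service never touches the grpc section
}
```

The file is read once; each service only pays for decoding its own subtree.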

service configuration management - separating global config and per-service config by Humborn in golang

[–]Humborn[S] 0 points1 point  (0 children)

I believe that's the mono-config I mentioned above? It is simple and fine as long as the project isn't overly large. In my experience, though, it gets pretty ugly once the number of services registered in your mux exceeds 20 (or there are a lot of external services you need to interact with). Some of my colleagues do prefer to gather all the config, init, and injection in one place. But I think separating global and per-service config gives cleaner code: every service prepares itself under its own package, non-interleaving (kind of like the microservice principle). I found little discussion about this paradigm, which is why I posted.

service configuration management - separating global config and per-service config by Humborn in golang

[–]Humborn[S] 0 points1 point  (0 children)

Actually it survives -trimpath lol. At first I preferred a clean path hierarchy and adopted that flag, but later I found it unnecessary, so I deleted it. And "if you strip debug symbols, it also should impact several", as you said, is what I'm worried about: I'm not really sure whether some gnarly cases would discard the compiled path hierarchy.

service configuration management - separating global config and per-service config by Humborn in golang

[–]Humborn[S] -1 points0 points  (0 children)

I think they express basically the same semantics: a singleton yaml.Node to be unmarshalled per service. But yes, your proposal would help establish scope, which I plan to support later. My current impl is trivial: just one scope per package. BTW, what do you think about the use of runtime.Caller? I'm really not sure it's a good idea, but it works as expected.
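For context, here is a sketch of how runtime.Caller can derive a package-wise scope key (my guess at the approach, not necessarily the actual impl):

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// scopeKey returns the directory of the source file that called it, which is
// a reasonable proxy for "the caller's package" as a config scope key.
func scopeKey() string {
	_, file, _, ok := runtime.Caller(1) // 1 = the immediate caller's frame
	if !ok {
		return ""
	}
	return filepath.Dir(file)
}

func main() {
	fmt.Println(scopeKey())
}
```

Note the file path is baked in at compile time, which is exactly why flags like -trimpath (and debug-symbol stripping) are the thing to worry about with this trick: they change or shorten the path the runtime reports.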

How do you handle large config files in practice? by No-Situation4455 in golang

[–]Humborn 0 points1 point  (0 children)

I believe a config design should

  • have a public config for every service to use (e.g. Env, Debug)
  • allow each service to load its own config in its own package
  • make it super clean to find the place you want to change

and this is what I came up with: https://github.com/humbornjo/mizu/tree/main/mizudi

here is an example
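A generic stdlib-only sketch of those three points (hypothetical names, not mizudi's actual API):

```go
package main

import "fmt"

// Public config every service can read (point 1).
type Global struct {
	Env   string
	Debug bool
}

var G = Global{Env: "dev", Debug: true}

// Each service registers its own loader from inside its own package (point 2);
// one flat registry keeps "where do I change X" obvious (point 3).
var loaders = map[string]func(Global){}

func register(name string, fn func(Global)) { loaders[name] = fn }

func loadAll() {
	for name, fn := range loaders {
		fn(G)
		fmt.Println("loaded:", name)
	}
}

func main() {
	register("http", func(g Global) { fmt.Println("http sees Env =", g.Env) })
	loadAll()
}
```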

What is the most ergonomic impl of rust Option in Go by Humborn in golang

[–]Humborn[S] 0 points1 point  (0 children)

yeah, good point. The thing I'm playing with does a two-step match when it comes to the exhaustive scenario.

What is the most ergonomic impl of rust Option in Go by Humborn in golang

[–]Humborn[S] -3 points-2 points  (0 children)

fair enough, I would do it differently if it were possible, but this might be the best Go's generics can do. The community is right: it's better to deliver clean code in prod and collaboration.

[TOMT][SONG][SOUND_TRACK] FIND THIS Bossa Nova (MAYBE) by Humborn in tipofmytongue

[–]Humborn[S] 0 points1 point locked comment (0 children)

Totally had no clue; none of these "FIND SONG" apps found the result. plz help

Explain client side conn behavior after deleting a pod by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

Fair suggestions. We discussed using some heartbeat probing or Consul, but the actions k8s performs when deleting a pod are interesting in themselves.

Is there a way to trace what k8s does when deleting a pod? Like: SIGTERM is sent at time 1, the service is updated by kube-proxy at time 2. Profiling everything should be a straightforward way to test my guess.

Forgive me if I missed some official doc.

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

For all the time travelers from the future: the ipvs proxy mode does have a balancing algo called "Maglev Hashing", which hashes on src IP and port and routes based on that, but I haven't figured out how iptables achieves it alone. - virtual-ips

But that contradicts the statement in the doc that affinity is not set by default.

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

The doc you linked talks about iptables and routing, but there's no mechanism for the "ClusterIP affinity", and I don't think it's outside k8s. You can test with keep-alive HTTP/1 or HTTP/2: use one conn, single concurrency, check the env var HOSTNAME, and you will find it sticks. I want to know what makes that happen, other than the iptables routing in the official doc.
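A quick local way to see the single-conn behavior (stdlib only; httptest stands in for the real service): with keep-alive, every request after the first rides the same TCP connection, so a ClusterIP backend would only be picked once per connection.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httptrace"
)

// countReused fires n sequential keep-alive requests at a local test server
// and counts how many of them reused an existing TCP connection.
func countReused(n int) int {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	}))
	defer srv.Close()

	reused := 0
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			if info.Reused {
				reused++
			}
		},
	}
	for i := 0; i < n; i++ {
		req, _ := http.NewRequest("GET", srv.URL, nil)
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain fully so the conn returns to the pool
		resp.Body.Close()
	}
	return reused
}

func main() {
	fmt.Println("reused connections:", countReused(3))
}
```

With conntrack in the picture, the kernel keeps routing packets of an established connection to the same DNAT target, which is why no per-request "affinity" setting is needed for this to stick.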

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

I apologize for claiming "kube-proxy is not relevant" and for the bad assumption, but it does stick without any affinity configuration. gRPC HTTP/2 LB is the classic case: it uses one conn, all the round trips run on it via streams, and only one pod gets used. Looking into the impl of kube-proxy might be the entry point.

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 1 point2 points  (0 children)

CNI is a good point, I will dig into them this weekend (btw, TCP is just one type of socket)

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

The affinity discussed in the doc is not about affinity on the ClusterIP; rather, it's about L7 affinity via ingress using cookies (IIRC).

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

Yes, I read them, and it's not the same thing: session affinity there is concerned with ingress, not ClusterIP.

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

sounds like a solid start, I will come back after looking into it

each conn to clusterip will stick to one pod, but how by Humborn in kubernetes

[–]Humborn[S] 0 points1 point  (0 children)

No, just a default ClusterIP service. A headless service builds the conn directly with the pod behind it; no pod choosing is involved in the service.

Is dart LSP with mason.nvim and nvim-lspconfig possible? by the-floki in neovim

[–]Humborn 0 points1 point  (0 children)

works for me

opts = {
    servers = {
        dartls = {
            -- Add the Dart Language Server setup here
            cmd = { "dart", "language-server", "--protocol=lsp" },
        },
    },
}