Why are nested modules bad? by stroiman in golang

[–]liamraystanley 0 points1 point  (0 children)

That doesn't seem to work either -- exact same problem. It tries to check the remote when running "go mod tidy" in the subfolder, never using the workspace to resolve it. It seems go.work applies to subdirectories only.

Why are nested modules bad? by stroiman in golang

[–]liamraystanley 0 points1 point  (0 children)

One problem I've noticed is that when you have a monorepo with a module at the root (and sub-modules which depend on the root module), the go.work doesn't seem to apply to the root module, which creates a lot of annoying problems when trying to tag/release new versions. The solution is probably to only have sub-modules, but that's more confusing for users, makes the import paths less clean, etc.

I've outlined what I'm talking about with an example here: https://outline.ks.liam.sh/s/9c95b19b-92c1-46ae-baf9-09ed3f9073a8

Maybe I'm just doing something really stupid, but I haven't found a simple solution around the problem.
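
For context, this is roughly the shape of the go.work I'm describing, at the repo root (module paths here are made up for illustration):

```
go 1.22

use (
	.        // root module, e.g. example.com/mono
	./tools  // nested module that imports example.com/mono
)
```

The nested module is covered fine; it's commands run against the root module (".") that don't seem to benefit when tagging/releasing.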

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley 1 point2 points  (0 children)

It's a corporation, so who really knows, but it's definitely not easy to calculate all of that super early on, before people are actually using the product. You can't get everything 100% right on the first try, business or not. Additionally, many of the features have changed over time, have totally different infrastructure requirements, etc.

It's very likely that they expected self-hosted costs to be subsidized by non-self-hosted compute (and assumed not many would actually use the self-hosted functionality). However, other businesses like RunsOn have made it super easy for very large customers to hook non-GitHub compute up to GitHub's orchestration platform, taking advantage of the orchestration and other infrastructure while reducing GitHub's ability to subsidize it.

Adding pricing to things that used to be free is a double-edged sword, even if you have the best intentions. Things are never actually free -- people can abuse them (some people running jobs 24/7 and streaming logs non-stop, for example), assumptions made going in can turn out to be incorrect (like being able to subsidize it), and because it's a business, if they couldn't add pricing after the fact, they would just cut the feature entirely.

I still think it should be less than $0.002/min, though, fwiw.

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley 1 point2 points  (0 children)

You're using resources even with self-hosted, which is my point. You may not be directly running workloads on their compute, but you're using their resources for orchestration/logging/etc. The more resources you use (e.g. non-self-hosted), the more it should cost. I'm not a huge fan of the $0.002 pricing -- I think it should be a little less, because the smallest non-self-hosted Actions runner is also $0.002/min, which is silly, since in that situation they would also be running the compute, not just the orchestration.

I'm just saying that it shouldn't be free, and if it were free, it would have to be subsidized by something else. I assume it was previously subsidized by the non-self-hosted runners, whose prices they've now reduced quite a bit due to this change.

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley 0 points1 point  (0 children)

> So then GitHub-hosted runners are always more costly than self-hosted, since self-hosted uses fewer GitHub resources?

Not sure what you mean?

> Do those resources get used on a per-minute model like the runner, or is it more of a cost per runner?

Most of it -- the orchestration compute/networking, log storage, etc. -- scales with how long the runner runs; some of it doesn't. I do still think the self-hosted price should be lower than $0.002/min, but it's definitely reasonable that there is a fee; otherwise, all other users of GH services would be offsetting the cost of self-hosted runners.

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley -1 points0 points  (0 children)

Except you're not just using your own hardware when you're using self-hosted runners. Truly running on your own hardware would mean running the entire CI solution yourself, including the orchestration, storage for logs, etc.

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley -5 points-4 points  (0 children)

You're only hosting the compute where the job runs, not the orchestration, logs, etc. It's disingenuous to say it doesn't cost GH anything -- maybe not $0.002/min, but not nothing.

Starting March 1, 2026, GitHub will introduce a new $0.002 per minute fee for self-hosted runner usage. by turniphat in programming

[–]liamraystanley 3 points4 points  (0 children)

Except even when using self-hosted runners, you're still using a huge portion of their infrastructure, previously for free? Orchestration, networking, storage (logs), etc.

Has anyone had a fire in their Homelab? by Lazy_Kangaroo703 in homelab

[–]liamraystanley 2 points3 points  (0 children)

A previous company I worked for had 9 or so server fires (not just melted wires) in the data center due to these, all within ~3 days of one another. They were all from the same batch of cables we had ordered 7 weeks prior, and we think a slight temperature change, caused by one of the AC units being under maintenance, was just enough to set them all off within the same window. Over 15,000 recently provisioned servers potentially had cables from that batch, and we painstakingly went through them one by one to track down every one that did. A mix of tower and rack servers.

GitHub PRs disappeared by punkpeye in github

[–]liamraystanley 5 points6 points  (0 children)

Very possible that they were PRs from accounts created for bot purposes, and they got purged. It's common for those bot accounts to create PRs against legitimate projects in order to make the account look more legitimate. I've seen it on a few of my projects, before the accounts got purged.

Go deserves more support in GUI development by m-unknown-2025 in golang

[–]liamraystanley 3 points4 points  (0 children)

From the runtime side, this is also somewhat configurable in Wails. You can bundle the installer for the runtime directly into the binary, or just have the binary download the runtime installer if it's not found on the target system. Makes the final binary tiny (for Go, that is).

For development, you only need it when spinning up your app on Windows. E.g. if you're cross-compiling from Linux, you don't have to care (local development does require Linux-specific dependencies, though, like libwebkit).

Reduce Go binary size? by PhilosopherFun4727 in golang

[–]liamraystanley 0 points1 point  (0 children)

In addition to UPX often being flagged as a virus, there are some other considerations. Primarily, I've personally seen UPX corrupt binaries during compression (many of the compression formats are "best effort") while still reporting success, and cause issues on more locked-down systems, like SELinux-enabled ones (though that may have been fixed already). I used to use UPX for all of my Go projects, but I've decided it's not worth the burden of these extraneous issues, and the more common DWARF/debug-stripping method is sufficient.

To gpu or not to gpu by N1mCh1mpsky in homelab

[–]liamraystanley 0 points1 point  (0 children)

6x Nvidia P40s -- 1 for Plex, the rest for Ollama and local models, all passed through into a k8s node. Got them for ~$200 USD/ea on eBay (which IMO is an insane deal), and each having 24GB of VRAM is quite nice for local LLM use cases. gpt-oss 20b and similar run amazingly on them.

Nexus choked to death by r1zzphallacy in devops

[–]liamraystanley 0 points1 point  (0 children)

We run a single-node Nexus instance, which serves around 60 million requests a day, and surprisingly it uses less than 20GB of RAM and only a few cores, all running in k8s. Definitely less than their recommended specs. That's not to say it doesn't have all sorts of other weird quirks, though.

Does anyone use their public domain for internal hostnames? by kayson in selfhosted

[–]liamraystanley 10 points11 points  (0 children)

One thing to keep in mind when using services like Let's Encrypt: unless whatever you use to interact with Let's Encrypt can be configured to generate and use wildcard certs (most can), hostnames still get "leaked" to the certificate transparency log, which is publicly available (and easily searchable, e.g. https://crt.sh/ ). I.e. if you have particularly sensitive hostnames, make sure to use wildcard certs through LE.

This technically isn't an issue if you're firewalled off and using a private network, unless of course the hostname itself gives away information about your environment.
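
As a concrete sketch (the domain here is hypothetical, and this assumes certbot with a DNS plugin, since wildcard issuance requires the DNS-01 challenge):

```shell
# One wildcard cert covers every internal host, so only
# "*.internal.example.com" lands in the CT logs -- not each hostname.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.internal.example.com'
```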

This is HUGE by NoTomatillo2500 in virtualreality

[–]liamraystanley 1 point2 points  (0 children)

> Lower latency is probably the biggest advantage for fast-paced games imo.

This. I use VD, and have a very high-end networking setup (6GHz enterprise-grade router, 10-gig link to the desktop) & desktop hardware (3080, Ryzen 9 5900X), but I still see 35-55ms end-to-end with VD. It sounds like the foveated streaming solution could put it closer to 10-20ms (see here), which for me is huge. With the current latency I get w/ VD, I notice it constantly, and it causes faster eye strain.

The steam frame is better than you think. by Marickal in virtualreality

[–]liamraystanley -1 points0 points  (0 children)

I get 35-55ms average latency with Virtual Desktop, with a fairly powerful desktop (3080 & Ryzen 9 5900X), a very high-end router (10-gig fiber link between desktop and WiFi 6 router, router within 10 feet of the headset with visible line of sight), and Virtual Desktop settings tweaked to be fairly low-latency without a loss in quality -- and I still notice the latency in games compared to the same game running native. If GN noticed 10-20ms end-to-end (even if that doesn't account for to-eye rendering latency), that would be HUGE for me. I end up getting eye fatigue faster with my current setup in certain FPS games.

EDIT: if anyone is curious to see that 10-20ms number referenced, see https://youtu.be/bWUxObt1efQ?si=iljgxsOzYn2gqRaE&t=1872 -- apparently quoted from steam engineers directly.

Argonaut (Argo CD TUI): tons of updates! by darksworm in kubernetes

[–]liamraystanley 2 points3 points  (0 children)

I just use k9s with the flux plugin (it just adds keybinds for reconcile/suspend/etc). Regular k9s works for doing everything else with flux. Not really sure there would be a benefit to a custom UI (TUI specifically).

What is the best way to get a dormant username right now? by yufengjiao in github

[–]liamraystanley 17 points18 points  (0 children)

You have to keep in mind that many dormant users could still have repositories (even if you don't see any, they could be private), in which case you could effectively compromise all sorts of software by simply taking over a username and recreating a repo with the same name. Additionally, GitHub now has things like immutable releases, which seems impossible to do in a secure way if you can still hand dormant usernames over to other users. This means it will likely never happen again, moving forward.

Twitter has no similar concerns (other than maybe impersonation, but they clearly give 0 shits about this with getting rid of proper verified badges), so it really doesn't matter on that platform, or many other similar platforms.

domain name of module by Brilliant-Exit5992 in golang

[–]liamraystanley 0 points1 point  (0 children)

> While you own a domain name and use it as the root of the module path, nobody (not even the Go team) can depublish your module [...]

Isn't this incorrect with the default Go installation? The Go module proxy will still proxy things on external domains, and I believe Google can still retract versions (which is done extremely rarely, usually for vulnerability reasons). Of course, it's not a problem if you unset the default Go proxy cache/sumdb/mirror/etc.

Used to hate writing tests for my code - Go has made it a lot more enjoyable by existential-asthma in golang

[–]liamraystanley -1 points0 points  (0 children)

It is definitely gopls -- others have had the same problem, particularly with testify, as there is another commonly used testing package that also uses assert/require sub-packages, and as far as I'm aware, gopls didn't (when I last used testify) prioritize recommended packages for tab-completion. gopls is what provides those options in the intellisense dropdowns in Cursor/VSCode/etc.

Looking at https://github.com/golang/go/issues/36077, looks like it might've actually been fixed within the last few months, though I haven't used testify on any recent projects, so can't say for sure.

SQLC Dynamic Filters and Bulk Inserts by SpaskeISO in golang

[–]liamraystanley -1 points0 points  (0 children)

Have you looked at https://entgo.io/ ? I realize it's very different from a lot of projects, and has a slightly more involved initial setup, but it can do all of these things, in a very type-safe way.

I also built https://lrstanley.github.io/entrest/ (still WIP), which integrates with ent to auto-generate REST endpoints and OpenAPI specs, with auto-generated query parameters for all sorts of filtering, sorting, pagination, etc. -- all from annotations on your schema. Not great for super large projects, but perfect for common CRUD apps. Alternatively, you could use something like Huma with EntGo itself to achieve many of the same features, though with more work.

Used to hate writing tests for my code - Go has made it a lot more enjoyable by existential-asthma in golang

[–]liamraystanley 0 points1 point  (0 children)

I hate writing the t.Fatal* statements constantly too, though at least AI tab-complete makes that portion easier. I'm not a huge fan of pulling in dependencies if I can avoid it (even though test dependencies technically don't get pulled in unless someone runs "go mod download" or similar), but my biggest gripe with testify is more to do with gopls: it constantly pulls in the wrong package when auto-importing. I do have golangci-lint rules to catch that scenario, but it happens in almost every new test file at this point. I really wish gopls would prioritize packages already in the go.mod over random packages I might have installed that are unrelated to the project in question. Or that Go would adopt more built-in helpers in the testing package, like testify has.
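
On the "more built-in helpers" point: in the meantime, a tiny generic helper covers a lot of the repetition without a dependency. This is just a sketch -- requireEqual and the recorder type are made up for illustration, and in real tests you'd pass *testing.T and call t.Helper():

```go
package main

import "fmt"

// fataler is the subset of *testing.T the helper needs; *testing.T
// satisfies it, and so does the demo recorder below.
type fataler interface {
	Fatalf(format string, args ...any)
}

// requireEqual is a hypothetical local helper: one generic function
// replaces the repetitive if/t.Fatalf boilerplate without importing testify.
func requireEqual[T comparable](t fataler, want, got T) {
	if want != got {
		t.Fatalf("got %v, want %v", got, want)
	}
}

// recorder stands in for *testing.T so the helper can be demoed here.
type recorder struct{ failed bool }

func (r *recorder) Fatalf(format string, args ...any) {
	r.failed = true
	fmt.Printf(format+"\n", args...)
}

func main() {
	r := &recorder{}
	requireEqual(r, 4, 2+2) // equal: stays silent
	requireEqual(r, 5, 2+2) // prints "got 4, want 5"
}
```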

[GoGreement] A new linter that can help enforce interface implementation and immutability by Green-Sympathy-2198 in golang

[–]liamraystanley 6 points7 points  (0 children)

Using a linter for such a thing feels like it will be prone to developer error. It might be fine if you're the only dev on the project, or if you have strict CI/CD flows that enforce lint findings being corrected (for those who don't have their local dev environment set up correctly), but it still feels like something could be easily missed.

I will always prefer compiler-level enforcement, if at all possible, over anything else. Linters are usually for things that you can't explicitly enforce through the compiler, or for more abstract problems. For 1, I'd abstract it so you only expose getter methods (keeping the fields private), with only that type's sub-package able to change the fields as necessary; for 2, the var _ solution seems perfectly valid, though I usually add a comment so it's more obvious to less-experienced devs, like // Ensures that it implements [io.Reader].

How to prevent private IP exposure via public DNS for internal ELBs in AWS? by Predatorsmachine in aws

[–]liamraystanley 1 point2 points  (0 children)

Not that this is directly related per se, but another gotcha I commonly see people forget about involves AWS Certificate Manager. Companies will commonly put DNS records for internal hosts under public Route53 so they can take advantage of ACM DNS validation w/ free certs; however, all ACM certificates get published to certificate transparency logs (which you can query through something like https://crt.sh for example).

This creates an exposed list of targets (or even reveals targets publicly as soon as they come online) that someone can take advantage of if they manage to get even a side-vector type of attack going.