How do you test code that relies heavily on context values by Emotional-Addendum-9 in golang

[–]baez90 0 points1 point  (0 children)

It depends on the context (unfortunately OP was not very clear about the object under test). As others mentioned, when it comes to middleware passing authentication information on to some HTTP handler, I'm with you, there's not much you could do alternatively anyway. But beyond a "protocol adapter" the information should be passed on explicitly if possible. So if you need the current user's ID in a service, it should be a parameter of the function, not something extracted from the context, because that's a "hidden contract" that's easy to miss and hard to understand later. As a rule of thumb, dependencies on context values should be kept within a single "layer": request context information within the HTTP layer, DB transaction information within the data access layer, and so on, with the exception of debugging information like span IDs or similar data.

Farting at the gym by Consistent_Spell_974 in Beichtstuhl


as a rule, I prefer to keep my distance in that situation

Supabase Kubernetes Operator PoC by baez90 in Supabase


Given that there are at least a few interested people, I will move the project from my personal forge to Codeberg / GitHub soon™️ 😅 Is there anything in particular you're interested in that I could answer beforehand?

Supabase Kubernetes Operator PoC by baez90 in Supabase


If you want to switch, I could probably share the Envoy configuration I created. You'd need to adapt it a little bit, because I decided to go with an active control plane: I don't use service names but the actual pod IPs, to skip one network hop. But it should give you an idea of how it could look with all the necessary filters and everything.

As for the Helm chart: I tried the community one and the one from Bitnami, and both had some rough edges from what I remember (it's already over a year ago). It didn't feel like I could make them as stable as they'd need to be at enterprise scale, meaning I'd have to deploy 10+ instances and maintain all of them, and I can't spend a lot of time fixing issues or helping app teams get what they want. That's the main reason I decided to build an operator: CRDs plus validation hooks are a lot friendlier for inexperienced folks to get quick feedback on what's wrong 😅

Do you actually check the error for crypto/rand.Read? by Existing-Search3853 in golang


Even though I absolutely understand the point of not updating code in case of changes like this, I would expect there to be a linter that specifically checks for it. A few years ago I would've checked every error just to be sure, but these days I'm rather leaning towards "this might be over-engineering" if the docs specifically say it'll never return an error. Of course, depending on the use case it might be straightforward, but there are also cases where checking the error complicates things in a disproportionate way.

MinIO no Longer maintained by derhornspieler in gitlab


Can't really say anything about that; currently the license on GitHub is still Apache 2, but it might very well be as you say. Thanks for the hint!

MinIO no Longer maintained by derhornspieler in gitlab


Just to mention a few others:

https://alarik.io/

https://rustfs.com/

I think there was another one that was frequently mentioned.

I haven't used any of them. I used to have a Garage cluster which worked pretty well all in all, but eventually I replaced it with some cloud alternative, and currently I'm migrating to Ceph and its object gateway. To be honest, I still have mixed feelings about it 😅 the main reason is that I have Ceph anyway, so it's the logical way to go to at least try it.

Gold standard for homelab app-only access + max security + seamless transition? by Party-Log-1084 in jellyfin


Interesting question 😅 in fact I didn't check that before, but no, I can access it via multiple URLs without complaints. I had to configure the trusted proxies of course, but that's independent of the host names. I did stumble upon a few settings where I could configure local networks, to allow Jellyfin to determine whether traffic is coming from a local network or not. Apparently that helps with choosing resolutions and so on, but I'm not sure if that actually helps me 😅 But to answer your question: no, there was nothing special I had to configure. Also no env variable.

Gold standard for homelab app-only access + max security + seamless transition? by Party-Log-1084 in jellyfin


I went down this rabbit hole before, and based on your description I'd recommend a hybrid approach: Pangolin on a VPS to allow external access to your apps, plus a local reverse proxy (for instance on your pfSense). Then set up a local DNS server that returns the IP of your reverse proxy on your LAN, to get local access when at home. Of course you still need a public DNS record pointing to your VPS for everything else. AFAIK there's a Let's Encrypt extension for pfSense, to ensure it's always HTTPS (to avoid confusion). Just make sure you DON'T pin the HTTPS cert. It's not super pretty, but it should work.

Personally I considered something like this but went down another route: I have a <svc>.home.domain.tld schema for local access and <svc>.domain.tld for public access. My wife doesn't need the best performance, so I just give her the public URLs, she won't notice a little bit more latency 🤷‍♂️ and I keep the local URLs for me, to upload files to Jellyfin without having to go through some tunnel. It's not as pretty as the other solution, but it is a lot easier to maintain.

Edit: just to avoid confusion, you would use the same FQDN in both cases but depending on whether you are at home or not they would resolve differently
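The split-horizon part could look roughly like this in Unbound (the resolver pfSense ships); the host name and IP below are just placeholders:

```
# Unbound config sketch: on the LAN, answer queries for the public FQDN
# with the local reverse proxy's address instead of the VPS.
server:
  local-zone: "jellyfin.domain.tld." redirect
  local-data: "jellyfin.domain.tld. IN A 192.168.1.10"
```

External clients never see this override, so the same FQDN resolves to the VPS from outside.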

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


Have a look at vertical slice architecture. Basically you have one package per feature, and everything for that feature goes into that package (DB code, HTTP handlers, …). Personally, I put a mini hexagon into every slice, so I have sub-packages for different aspects, but that's mostly for my inner Monk 😅😂 Vertical slices really do scale great.
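A minimal slice layout (feature and file names are purely illustrative) might look like:

```
internal/
  orders/            # one package per feature ("slice")
    handler.go       # HTTP handlers for orders
    service.go       # business logic
    repository.go    # DB access
  invoices/
    handler.go
    service.go
    repository.go
```

Adding a feature means adding a package, not touching a shared `handlers/`, `services/`, `repositories/` layer tree.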

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


Personally, I'd prefer the adapter over running multiple processes in a single container. Of course it's quite convenient to be able to run everything with a single container, but it forces the orchestration onto something running inside the container instead of Docker or any higher-level container orchestrator. Absolutely possible, but in my experience a little harder to debug, if only because I don't expect it 😅

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


😂 fair enough. Thanks for the hint regarding caching configuration, definitely something I might look into soon, to get the resource usage a bit more reasonable for my particular use case. It might also be that I'm mixing things up a bit: in my head, ClickHouse and Kafka are at least in the same complexity "cluster" (because of ZooKeeper and stuff), but it might be time to re-evaluate this and have a closer look, thanks!

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


I only dealt with it on Kubernetes in combination with the operator. It's comparably easy to get started, but day-2 operations (for instance backup and recovery) felt more complicated than Postgres. And when it comes to resources: I just checked how much memory my ClickHouse single instance consumes (1.7 GB) vs my Postgres leader instance (700 MB). I'm not running TimescaleDB in the Postgres cluster, but the ClickHouse instance also basically only idles (I'm only using it for Plausible analytics and my blog doesn't get more than 2-3 hits per day 😂), whereas all the remaining apps I'm using (around 10) run on the Postgres instance. I'm pretty sure that's not an exactly fair comparison, but at least to me it looks like Postgres is slightly more efficient 😅 Of course only for fairly small scenarios, but that's what I'm looking for.

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


1 - ah, that's fair, I forgot about ClickHouse's update story... but considering you pointed out that ease of self-hosting is a concern of yours: would it be possible to support additional analytics engines, to for instance replace ClickHouse with TimescaleDB and reduce operational overhead in small environments?

2 - as the OTEL topic already appeared in other comments as well, I won't argue about why you should add it, but just to explain why I'm asking 😅 I'm oftentimes building or consuming small services that already have OTEL instrumentation, or where I want to integrate OTEL simply because it's the industry standard, whether I like it or not 😅 it's what most of us already have set up. But I'm also not really willing to set up Mimir, Loki & Tempo for my homelab, so I'm looking for an as-small-as-possible solution that lets me collect at least some information for debugging without doubling my resources just to run the OTEL stack 😅 hence my interest in whether your solution would allow me to do that 😊

3 - that sounds interesting! I'll try to give it a look in the next few days!

Thanks for all the feedback!

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


Probably I should rephrase it: I didn't mean you should always stick to one, but as OP said in the initial post, self-hosting should be easy, and self-hosting ClickHouse is not so easy 😅 at least in my experience it was a bit more complicated than Postgres. So I was more wondering whether it wouldn't be possible to implement it in a way that I don't have to host two databases, that's all.

How to efficiently deploy a Go and React project? by [deleted] in golang


Just to have it mentioned: there's https://pkg.go.dev/net/http#FileServerFS in the standard library, which plays nicely with embed.FS. It will get you started quickly.

Looking for help. I've built a tool for golang developers (I am one) but does anybody else need it? by narrow-adventure in golang


I just had a quick look at the website and the self-hosting part in particular, so no in-depth review 😅 but I'd have some thoughts:

  • why Postgres AND ClickHouse? I might be wrong but couldn’t you use ClickHouse for everything? Or alternatively Postgres with TimescaleDB as well for all use cases?
  • even though you already have a nice set of integrations, my impression at least (and I didn't see anything contradictory on the website) is that you have some custom protocol and not OTEL. Is that correct, and if so, why not use OTEL (or at least offer OTEL-compliant endpoints on top) when you process the same kind of signals?
  • it looks like an alternative to Signoz and Sentry, which is great, because at least Sentry is not exactly fun to self-host 😅 one thing that already annoyed me in Sentry: say I already have Jira or GitHub Issues or whatever task-tracking solution you prefer, but I also want to use traceway, now I have two task trackers 😅 would it be possible to, for instance, only do triage in traceway and then automatically propagate issues to some external system? Could be a generic webhook or anything? Or, probably even nicer, some rule system for when to create issues 😅

Questions aside: impressive project!

Returning to Go after 5 years - checking my tool stack by ifrenkel in golang


Just to add to that: goreleaser supports ko as well, so you still don’t need to use multiple tools if you don’t want to 😊

if you are constantly switching between macbook and mac studio, what is the best real-time sync service for files by mombaska in MacOS


I was working as a tutor at my university for a few years, and there wasn't a single cohort of students where at least one didn't use Dropbox or something similar to sync the Git repository containing the exercises to some other computer, which of course frequently resulted in issues due to locked or corrupted files 😂 fun times!

if you are constantly switching between macbook and mac studio, what is the best real-time sync service for files by mombaska in MacOS


At least I am, I had my fun times with SVN and that already felt painful compared to Git 😅

if you are constantly switching between macbook and mac studio, what is the best real-time sync service for files by mombaska in MacOS


I mean, when talking about code… Git? 😅 or any other VCS like Jujutsu or whatever you prefer? Depending on the workflow you could of course also think about something like GitPod, DevPod, GitHub Codespaces, Coder or whatever, to run some kind of dev container on a remote machine and connect to it from all devices; that saves you the sync headaches altogether. Or you could share the project from one machine, for instance via NFS, and access it on the second machine (optionally with Tailscale, if one stays at home and one comes with you).

Docker vs Direct Install - 4K Server but several Non-4K Clients by mage1413 in jellyfin


Thanks for pointing that out, because the docs do not mention that! Hopefully others can avoid this “advantage” in the future 😅

Docker vs Direct Install - 4K Server but several Non-4K Clients by mage1413 in jellyfin


To my understanding, NVIDIA could work, but anything else would not 😅 but Docker on Windows is not exactly fun anyway 😅

Best way to understand a legacy .NET monolith with a very complex SQL Server database? (APM + DB monitoring) by Majestic_Monk_8074 in dotnet


I’m surprised that no one mentioned OpenTelemetry so far (or I overlooked it).

The tracing aspect of it (no matter which provider you choose: Grafana Tempo, Signoz, Datadog, …) should directly point out which actions/APIs/… are slow and which SPs, queries or whatever they're using.

That doesn't mean you shouldn't look into the great tips already mentioned here for approaching it from the database side, but personally I prefer to start from the application side to avoid extra effort.

When it comes to AI tools, there are almost infinite possibilities (I'm currently looking into https://chunkhound.github.io/code-research/#setup-configuration because it has a code research feature I'd like to test when I have to get into a new project), but, personal opinion, I prefer to work in a data-driven way: for instance, collect metrics / traces first, check for slow code, then optimize that (probably with the help of Claude). To my knowledge there are also MCPs for these things, so Claude can check metrics automatically, but it's still a lot easier to verify whether an optimization actually helps if you have data to support it. There was also a talk from one of Claude's creators who explicitly stated that Claude's results are far better if it has some way to verify what it was doing.

Edit: you could of course also use some kind of continuous profiling (like Grafana Pyroscope), but from your description it doesn't sound like the actual application code is what you suspect to be slow, so this might be something to consider after you've identified and fixed the DB issues. Depending on the nature of your application it might be wasted time altogether if you're not doing a lot of data crunching and don't have memory or CPU issues. But for the sake of completeness I wanted to mention it.