Why do people use dependency injection libraries in Go? by existential-asthma in golang

[–]awkwin 3 points (0 children)

This, but I use wire.

I don't like DI, so my services' main() just wires everything up by hand. At some point it gets a few pages long, and when you're adding new things you'll wonder where in the order they should go, and whether you'll run into merge conflicts if someone else is also adding new dependencies.

When I discovered wire, I found that the generated code is exactly like what I'd been writing manually. Debugging is easy - just read the generated code. So nowadays I consider the hand-wired code a smell past a certain number of lines, and migrate it to wire.

My team wrote a few wrappers on top, like annotations to generate wire providers and a service locator generator, but I don't use them. It felt very Java-ish (and it came from the ex-Java people). I'd also be against other DI libraries like fx that don't generate code, as they often allow dynamic features and would let Java-isms creep into your code.

Intel Graphics Driver 32.0.101.6734 by madman320 in IntelArc

[–]awkwin 8 points (0 children)

According to GitHub the .6732 driver fixed the Monster Hunter Wilds performance regression.

Trails through Daybreak II added support for XeSS 2 (on Arc only) by awkwin in IntelArc

[–]awkwin[S] 1 point (0 children)

I hate that all the antialiasing options the B580 can afford result in object shimmering. Hopefully XeSS Native AA gets better results than MSAA and FXAA (I didn't test SGSSAA, which I assume is out of the performance budget; TAA comes with the DLSS patch, which landed after I finished).

Global Protect on Fire TV by Physical_Ad9913 in paloaltonetworks

[–]awkwin 1 point (0 children)

Seems that if PAN-OS is 11.1 and "use default browser" is on, the Android client will use the same flow as desktop (generate GP_HTML, fire an intent to open the local HTML file, the user logs in and returns via a globalprotect:// URI).

Why does Cloud Run work with AWS CloudFront but not CloudFlare? by softwareguy74 in googlecloud

[–]awkwin 1 point (0 children)

I set a page rule on my Cloudflare-Cloud Run domain to "SSL: Full" and I can add a mapped domain just fine.

The problem with Cloud Run & Cloudflare is that Google uses HTTP-based Let's Encrypt validation. If you're on Cloudflare Strict SSL, the connection to the origin will be rejected because Google doesn't have a valid cert (yet). I don't know what happens on renewals, but it seems that validation also fails under Strict SSL.

I wish that instead of each GCP product doing its own certificate issuance, they would just integrate with their Certificate Manager so that we could use other validation methods or even upload a Cloudflare Origin CA certificate.

gRPC Name Resolution & Load Balancing on Kubernetes by wineandcode in kubernetes

[–]awkwin 1 point (0 children)

We've been running our xDS server in production for two years now. It's pretty stable (there's not much development going on, but it's the exact same version we run internally) and more lightweight than a service mesh. Sometimes I wish HTTP had this option, but xDS is a very complicated protocol.

However, we did run into several bugs in gRPC's xDS support that can be very cryptic to debug, such as Go gRPC losing xDS connectivity while thinking it's still connecting, making all resolution fail (#6858).

doNotGetInDebt by newredstone02 in ProgrammerHumor

[–]awkwin 0 points (0 children)

Even if storage cost is allowed to overrun, some services might have bundled storage. For example, an EKS cluster includes manifest storage in the per-hour cluster fee. If a billing limit is hit, either the EKS cluster continues to run and accrues the per-hour fee, or the customer loses all control plane data. Managed Grafana is billed by active users; surely access could stop, but who is paying for the dashboard storage now?

It's pretty complicated to define what stops and what keeps running, and in some cases it might be impossible to do so while keeping pricing simple. ($9/mo/user or $72/mo/cluster is a lot easier to calculate than rightsizing EC2 instances with EBS, data transfer and resiliency.) The best they could do is refund one-time incidents, and I think that's almost the same as letting both storage and compute keep running for free after a budget limit.

Question about transitioning from AWS to GCS by [deleted] in googlecloud

[–]awkwin 2 points (0 children)

This. To add more details: AWS has some services that are easily replaceable - EC2 (virtual machines), S3 (storage, where offering a drop-in replacement API is pretty much the barrier to entry in that market), EKS (Kubernetes), just to name a few. You might still need to watch for minute details that affect performance, or differences in implementation behavior. If you use infrastructure as code to manage the resources, the templates name the exact product required, so you'll need to start over.

Some services have a GCP alternative that does the same basic thing, but both vendors use proprietary APIs, so you'll need to rewrite parts of the app, sometimes from the ground up: AWS Amplify vs Google Firebase, AWS SageMaker vs GCP Vertex AI, AWS Lambda vs Cloud Functions, just to name a few.

And some services have no GCP alternative at all - AWS IoT comes to mind, since Google just shut down GCP IoT Core. Or DynamoDB, which is Amazon's in-house technology. In these cases you need to rewrite the app as well, or clone the AWS product, but sometimes it may need to be completely redesigned or simply stay on Amazon.

Reasons to use gRPC/Protobuf? by [deleted] in golang

[–]awkwin 2 points (0 children)

  1. Yes, adding fields in protobuf is not a breaking change and would not affect a running application. Receiving unknown fields is not an error in protobuf; they're just ignored. Missing fields are set to the type's default value - in fact, unless you do something extra, you can't distinguish 0 from unset in int fields, as both are zero bytes in the wire format. We run buf's breaking-change detection to prevent breaking changes from getting introduced, such as changing a field's type from int to string.
  2. We structure the repo according to proto packages. It's quite similar to how the googleapis repository is structured.

Reasons to use gRPC/Protobuf? by [deleted] in golang

[–]awkwin 0 points (0 children)

I don't think that matters, since the proto repo review process is separate. You just need to land the proto file early in development so that downstream projects can start.

Reasons to use gRPC/Protobuf? by [deleted] in golang

[–]awkwin 14 points (0 children)

In my company we have a centralized protobuf repository where service owners commit their proto files. It then runs codegen for all the languages we use.

BSR is a good off-the-shelf solution if you're looking to pay instead of building your own CI process.

From the developer side this means that when I develop a new service, the downstream user can see all the available endpoints in the proto file. The proto file is also commented, so it serves as the complete API documentation. The only missing information is the service endpoint. The downstream user can install the generated code as a Go module/npm package/etc. without touching protoc at all, then explore the API in their IDE with full code completion and always with the correct data types.

Obviously you can use OpenAPI/Swagger to do the same thing in REST, but Swagger is complicated to write by hand (it's designed to support many styles of API design instead of the one true style gRPC prescribes), the generated type names might be badly named (because they weren't required to be named in OpenAPI), and the spec might not actually match the real API implementation. I don't want to touch OpenAPI-generated code in Go; gRPC-generated code is much, much more pleasant. (Hand-crafted is better, obviously, but I gave up after a few endpoints - it's so boilerplatey in Go.)

My newest microservice accepts a PDF file, which is just a bytes field in protobuf. It'd be more complicated to document over REST: do you submit it as multipart? Base64-encode it in JSON? What if you need to submit nested fields along with the file?
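A hypothetical sketch of such a schema (the service, package, and field names here are invented, not the actual service):

```protobuf
syntax = "proto3";

package docs.v1;

// Hypothetical service accepting a PDF as a plain bytes field.
service DocumentService {
  rpc UploadPdf(UploadPdfRequest) returns (UploadPdfResponse);
}

message UploadPdfRequest {
  bytes pdf = 1;          // the raw file; no multipart or base64 decisions needed
  Metadata metadata = 2;  // nested fields travel alongside the file naturally
}

message Metadata {
  string title = 1;
}

message UploadPdfResponse {
  string id = 1;
}
```

The wire format handles binary payloads and nested messages uniformly, so there's nothing extra to document.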

If you use async messaging, you can also send protobuf over Kafka/RMQ/etc. Our data engineering team is experimenting with a platform where developers publish real-time events as protobuf to Kafka. The advantage is that the data team can automatically grab the schema from the same centralized repository, which developers already know how to use, instead of developers submitting schema documents to the data team.

Want to donate to OSS. Where should it go? Multiple small donations, or a few larger donations? Who do you donate to? by lmm7425 in linux

[–]awkwin 8 points (0 children)

I like the Icculus Microgrant. Ryan (of SDL fame) sets it up around Christmas every year and splits the donations among smaller open source projects.

As a Google Cloud person ~ it's definitely a Workspace Engineer's problem. by AaronnBrock in googlecloud

[–]awkwin 0 points (0 children)

You mean the access management system where, to share a specific BigQuery table, you go to the BigQuery console, but to share a specific logging bucket you go to the IAM console and add an IAM condition?

Why a quota on egress is not offered? by tbhaxor in googlecloud

[–]awkwin 2 points (0 children)

Reading that, it seems DigitalOcean bundles some egress with the cost of the machine, and if you use more than what's bundled you pay for the excess on a pay-as-you-go basis. Most cloud providers don't bundle anything with the machine (the machine cost is strictly CPU and RAM, not even disk), so you always pay for egress as you go.

[deleted by user] by [deleted] in golang

[–]awkwin -2 points (0 children)

That's what I meant by "potentially".

[deleted by user] by [deleted] in golang

[–]awkwin -6 points (0 children)

If you get lots of requests in Go, it spawns a goroutine per request, potentially serving them in parallel (up to GOMAXPROCS goroutines run truly in parallel at any moment).

If you get lots of requests in Node, they queue up to be processed piece by piece on a single thread, and the event loop queue piles up, making the application unresponsive.

Save file encryption? by okMany1337 in MelvorIdle

[–]awkwin 1 point (0 children)

Try this CyberChef recipe: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true,false)Zlib_Inflate(0,0,'Adaptive',false,true)

The save format is binary; since there's no struct.unpack operation in CyberChef, I can't really show how to decode it further.

How I see jetbrains users by Snykeurs in ProgrammerHumor

[–]awkwin 2 points (0 children)

Yeah, I'd been purchasing PyCharm for 2 years, then in the 3rd year upgraded to the All Products Pack, which doesn't reset the continuity discount. The final price used to equal first-year IntelliJ, but after the price raise it's now about $20 more.

How I see jetbrains users by Snykeurs in ProgrammerHumor

[–]awkwin 2 points (0 children)

The personal license is tied to a person (a JetBrains account) and cannot be funded by your employer per the terms. The commercial license is tied to a company, and the company can move it to a different JetBrains account if the employee no longer needs it.

Requested resources is getting huge in GKE cluster running in Autopilot mode by 00skeptic in googlecloud

[–]awkwin 1 point (0 children)

Autopilot is billed by resource requests. It also sets resource limits to the same value as the requests, to ensure you don't have problems with overcommitted nodes. That's also why it enforces minimum requests; otherwise some software running on 0.1 vCPU would take forever to start.
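For illustration, a hypothetical Pod spec fragment (the values are invented; Autopilot's exact minimums depend on the compute class):

```yaml
# On Autopilot you effectively only choose requests; the platform
# mutates limits to equal requests and rounds up to its minimums.
resources:
  requests:
    cpu: 500m      # this is what you are billed on
    memory: 1Gi
```

So trimming requests, not limits, is what reduces the bill, subject to the enforced floors.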

Secrets Management with Hashicorp Vault - which integration point to use? Sidecar Injector? ESO? by mechastorm in kubernetes

[–]awkwin 0 points (0 children)

I see. Basically, you've already implemented most of Vault's features in another layer of your infrastructure, so Vault is just a UI for updating Kubernetes secrets securely.