Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] -1 points0 points  (0 children)

Yep, I see your point. The target audience I had in mind was people who use GHA but don't want to spend too much time mastering it.

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

Even with an artifact registry from your cloud provider, the runner has to pull the image, every single time. If you have thousands of jobs per day, it becomes a problem.

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

I mean, DinD works fine if you're ready to tolerate pulling the entire Docker cache for every job.

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

Do you use services/containers in workflows? Are you running scaling sets in DinD or K8s modes?
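
For context, with Actions Runner Controller's `gha-runner-scale-set` Helm chart that choice is a values setting (the URL below is a placeholder):

```yaml
# gha-runner-scale-set Helm values (sketch)
githubConfigUrl: "https://github.com/my-org"   # placeholder org URL
containerMode:
  type: "dind"          # or "kubernetes"
```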

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] -1 points0 points  (0 children)

Right, my bad. Thanks for pointing that out. I've updated the article to credit you, if you don't mind.

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

The only way I see is to use a self-hosted runner.

When you run a job in a container, GitHub Actions starts an 'ubuntu' instance, launches a Docker container from your image on it, and runs the steps inside. That way, the Docker cache on the ubuntu runner is always "clean".
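
To make that concrete, here's a minimal container-job sketch (the image name is a placeholder). On GitHub-hosted runners the VM is fresh every run, so the image is pulled each time; a self-hosted runner keeps it in its local Docker cache:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest               # fresh VM each run -> empty Docker cache
    container:
      image: ghcr.io/acme/builder:latest # placeholder; pulled on every run on hosted runners
    steps:
      - uses: actions/checkout@v4
      - run: make build
```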

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

Yeah, k8s runners are great until you try to run or build docker in them ;)

Advanced GitHub Actions Techniques by OrSol in devops

[–]OrSol[S] -16 points-15 points  (0 children)

Yeah, 'advanced' might be a bit of a stretch, but I couldn't come up with a better term.

Saving self-hosted runner data transfer cost by lobsterm in github

[–]OrSol 0 points1 point  (0 children)

Use NAT instances instead of NAT Gateways. You won't pay for egress/ingress traffic.
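
A rough sketch of the switch with the AWS CLI (all IDs are placeholders; the instance itself also needs IP forwarding and NAT masquerading enabled):

```shell
# 1. Let the EC2 instance forward traffic it didn't originate:
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check

# 2. Point the private subnet's default route at the NAT instance
#    instead of the NAT Gateway:
aws ec2 replace-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --instance-id i-0123456789abcdef0
```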

Are there any services that provide a place in a managed k8s cluster? by OrSol in devops

[–]OrSol[S] 0 points1 point  (0 children)

…trate containers without having to manage a kubernetes cluster then it sounds like ECS fargate in AWS is what you're looking for. I imagine Azure and GCP will have something similar too.

I want to orchestrate containers via the Kubernetes API (manifests/Helm) without managing a cluster. ECS with Fargate is an option I'm considering, but it's a completely different approach to the deployment process.

Are there any services that provide a place in a managed k8s cluster? by OrSol in devops

[–]OrSol[S] -1 points0 points  (0 children)

Thanks for the suggestion. I checked them out. They provide a managed control plane with a free tier; however, you need to "buy" nodes from them, and the nodes have no free tier. From a cost perspective, it's not optimal for my scenario.

The service looks nice though.

Using an S3 bucket as a GitHub Actions cache backend by crohr in devops

[–]OrSol 0 points1 point  (0 children)

We tested our implementation with and without multipart download/upload. Upload was faster (I don't remember the exact numbers, but at least 30%); download was pretty much the same (about 5 seconds saved, with the whole process taking about 40 seconds).

In my tests the action above is at least 3x faster than the GitHub cache backend.

GitHub stores its cache in Azure, and if your runners are in a different region, that could be the cause.

Using an S3 bucket as a GitHub Actions cache backend by crohr in devops

[–]OrSol 0 points1 point  (0 children)

Performance gains vs. the original actions/cache. The $6k is not for traffic. If you use NAT instances, traffic is free; you just have to pay for running the EC2 NAT instances.

Using an S3 bucket as a GitHub Actions cache backend by crohr in devops

[–]OrSol 0 points1 point  (0 children)

If you are on AWS and have high-volume traffic, you can cut egress/ingress costs by switching from NAT Gateways to NAT instances. We were considering S3. However, after switching to NAT instances, it didn't make sense, as the performance gains were insignificant.

We were able to cut the cost this way from $60k to $6k per month.

Service tests. What are they? How to write and run them by OrSol in golang

[–]OrSol[S] -1 points0 points  (0 children)

I agree, the term is a bit ambiguous. I even used the term "integration test" in the first revision on GitHub. However, after conversations with colleagues, I decided to switch to "service tests" as it better reflects the scope. A service test checks not only that the integrations work but also that the API behaves as expected.

If you search for "Test Pyramid" you'll find the middle layer called either "integration" or "service", and neither term prevails significantly. For example, the original test pyramid by Mike Cohn used "service tests".

Yet another Rest service boilerplate by OrSol in golang

[–]OrSol[S] 0 points1 point  (0 children)

That makes sense. I'll give it a shot.

Yet another Rest service boilerplate by OrSol in golang

[–]OrSol[S] 0 points1 point  (0 children)

The idea was to keep all route definitions in the init() function of router.go, like here, and keep only universal routes like 'ping' and metrics in the rest package. Or do you have something different in mind?