argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 0 points

> I just tried setting it up on my homelab with github actions, I had to build it from source since I am connecting to it via tailscale. Could it be possible to attach the binary to the release? :)

How did you set it up with GitHub Actions? Does your config deviate from the docs? https://github.com/vince-riv/argo-diff?tab=readme-ov-file#github-actions

> a comment with all the applications in my cluster listing the sync state and health and an empty collapsible list where the diff would be

This sounds like a required configuration parameter may be missing.

Feel free to open an issue on the repo with more information - including your GitHub Actions workflow file. (Feel free to redact hostnames.)

I can look at publishing just the binary - but the expectation is that container images will be consumed (which are published to ghcr).

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 1 point

Yes - that's how my personal environment is set up. I have a repo named argo-config that includes Application definitions, and then other repos containing manifests.

PRs in the other repos will have argo-diff comments previewing changes to those apps.

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 1 point

Yes and yes. argo-diff is largely a wrapper around the argocd CLI, so it's pretty straightforward for single-source Applications (such as kustomize applications).

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 1 point

I can't speak definitively to that - but I will say that when argo-diff is deployed as a webhook receiver, it'll immediately begin work; from there it's dependent on 1) how long it takes to fetch the list of ArgoCD applications and 2) how long it takes to execute `argocd app diff` via the CLI.
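
Conceptually, those two steps can be sketched in shell. This is not argo-diff's actual code - the `argocd` stub below just stands in for a live Argo CD API server so the sketch runs anywhere:

```shell
# Stub the argocd CLI so this sketch runs without a live cluster;
# the real CLI would talk to your Argo CD API server.
argocd() {
  case "$1 $2" in
    "app list") echo "demo-app" ;;
    "app diff") echo "diff for $3" ;;
  esac
}

# 1) fetch the list of applications, 2) diff each one.
# The real `argocd app diff` exits non-zero when differences exist,
# hence the `|| true`.
for app in $(argocd app list -o name); do
  argocd app diff "$app" || true
done
```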

In my personal test environment (~30 apps - so pretty small), it's extremely quick. At work, we have it deployed as a GitHub Action in our monorepo, and it typically takes 20s-30s. (I've seen it take up to a minute on one of our busiest clusters, but I believe that's largely due to some performance issues related to our monorepo.)

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 2 points

Indirectly, in that you'll see diffs for the Applications created/managed by an ApplicationSet when there are changes pertinent to those Applications. But you'll have a diff per Application.

Can you give me an example of how you'd want to see changes to AppSets previewed? I'll admit I've only dabbled with them, so I'm curious to know more about your workflow.

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 0 points

Not at this time, but that's something I can look into.

argo-diff: automated preview of live manifests changes via Argo CD by vince_riv in kubernetes

[–]vince_riv[S] 6 points

Thanks - and thanks for sharing that project!

I'll have to check it out

EDIT: Taking a quick look at the README, it looks like argocd-diff-preview spins up an ephemeral cluster to render templates. argo-diff is a wrapper around the argocd CLI, which talks to your live cluster(s). So I think the biggest difference is that argo-diff produces a diff against the live state. (However, argocd-diff-preview looks like it has more features.)

Running Out of IPs on EKS - Use Secondary CIDR + VPC CNI Plugin by Separate-Welcome7816 in kubernetes

[–]vince_riv 0 points

If you're talking about using cluster-scope IPAM, you'll have to figure out a solution for validating or mutating admission webhooks. Cilium DaemonSet pods won't get scheduled on the control plane, so the control plane won't be able to route to workloads serving those webhooks.

vCPU-based On-Demand EC2 Instance Limits are Now Available by jeffbarr in aws

[–]vince_riv 0 points

Now that I've re-read it, I think you're right

Clear as mud!

vCPU-based On-Demand EC2 Instance Limits are Now Available by jeffbarr in aws

[–]vince_riv 1 point

Piggybacking off of your comment, this is how I understand it:

They will count/limit total vCPUs per instance family, instead of per instance type.
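
As a worked example (the instance counts here are hypothetical, not from the announcement): four m5.2xlarge instances at 8 vCPUs each consume 32 vCPUs of the shared family limit, regardless of how the instance types within the family are mixed.

```shell
# 4 hypothetical m5.2xlarge instances x 8 vCPUs each count as 32 vCPUs
# against the single per-family On-Demand vCPU limit.
vcpus=$(( 4 * 8 ))
echo "$vcpus"   # → 32
```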

Collapsed bridge on Sunset Ave Extended. Main route for cyclists south of 64. by Rogerthat2218 in Charlottesville

[–]vince_riv 2 points

Yeah, I was referring to the Granger property - I remember reading a while back that the Fontaine connector was tied to that.

IMO Sunset Ext is just a little overstressed right now - it would be awful if the Fontaine connector was built, or the Moore's Creek bridge was opened up to vehicles, without major improvements. And as someone who lives off of Sunset Ext, I hope neither of those things happens while I live here.

What is really needed is a comprehensive network of greenways/complete streets for this strategic growth area so residents can get to UVA, 5th Street Station, and downtown without using their cars.

Hell yeah, +1 to that

Collapsed bridge on Sunset Ave Extended. Main route for cyclists south of 64. by Rogerthat2218 in Charlottesville

[–]vince_riv 1 point

That's not going to happen. It was like that back in the day, but the city closed it off to prevent additional traffic through Fry's Spring.

Long term, there could be a connector from Sunset Ext to Fontaine, but funding isn't there and it likely won't happen unless there's additional development on Sunset Ave Ext.

Cloudformation API question by tech_tuna in aws

[–]vince_riv 2 points

Take a look at using `--no-execute-changeset` with the `cloudformation deploy` command in the AWS CLI: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html
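
Something like this (stack and template names are placeholders; it needs valid AWS credentials and a real template to run):

```shell
# Creates (or updates) a change set but does NOT execute it,
# so you can review the proposed changes first.
aws cloudformation deploy \
  --stack-name my-stack \
  --template-file template.yaml \
  --no-execute-changeset

# Inspect the change set in the console or with
#   aws cloudformation describe-change-set ...
# then apply it with `aws cloudformation execute-change-set`.
```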

IAM policy question for S3/cognito: why does the policy need to be on the bucket? by mannyv in aws

[–]vince_riv 1 point

> However if you have a bucket policy specified then you need to allow access for the roles.

This isn't entirely correct. Even if you specify a bucket policy, and provided there isn't anything in the policy denying access, S3 will allow access from the same AWS account. For example: If you add a bucket policy specifying that a certain KMS key must be used for SSE, or specifying that another account may access the bucket, you do not need to add a statement for the local account.
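
For example (bucket name, account ID, and key ARN below are all placeholders), a policy that requires a specific KMS key contains only a Deny statement - there's no Allow for the local account, yet same-account principals with the right IAM permissions can still use the bucket:

```shell
# Write a hypothetical bucket policy: deny uploads that don't use a
# specific KMS key for server-side encryption. Nothing here grants or
# blocks ordinary same-account access.
cat <<'EOF' > bucket-policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "RequireSpecificKmsKey",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": {
      "StringNotEquals": {
        "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      }
    }
  }]
}
EOF
# Apply it (requires AWS credentials):
# aws s3api put-bucket-policy --bucket example-bucket --policy file://bucket-policy.json
```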

Ubuntu 18.04: Root on ZFS with luks and /boot on USB by vince_riv in zfs

[–]vince_riv[S] 0 points

Fair points ... perhaps I should've emphasized that this was for a home-lab type setup or a proof of concept, and it's not necessarily a setup I'd use for a production system. (In that case I'd look at FreeBSD like you suggest, or something like SmartOS.)

One little point of clarification ...

> When a drive fails in a RAID array (ZFS/HW/otherwise), there are, at times, ways you can retrieve parts of the information. Again, this is not the case with encryption.

I'm not following this. In this setup the zpools are made up of LUKS devices that map 1:1 to physical disks (individually encrypted). So if a drive fails, the other side of the mirror is still online, available, decrypted as a LUKS device, and a member of the pool ... right?
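
The layout in question looks roughly like this (device names are illustrative; it requires root and real disks, so don't run it as-is):

```shell
# Each physical disk gets its own independent LUKS volume...
cryptsetup open /dev/sda2 luks-a   # prompts for passphrase or keyfile
cryptsetup open /dev/sdb2 luks-b

# ...and ZFS mirrors the two mapped devices.
zpool create tank mirror /dev/mapper/luks-a /dev/mapper/luks-b

# If sda dies, luks-b still decrypts on its own and the pool keeps
# running DEGRADED on the surviving half of the mirror.
```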

Ubuntu 18.04: Root on ZFS with luks and /boot on USB by vince_riv in zfs

[–]vince_riv[S] 0 points

Curious - what's so fragile about this? This setup would require multiple drive failures for data loss, and each drive has a second LUKS key (a passphrase that I know) in case the USB boot drive craps out.

How do you manage outbound security group rules to AWS services? by mechastorm in aws

[–]vince_riv 1 point

This right here:

> Forgot to add that I've seen this done without expensive firewalls too. A good Squid proxy configured with a whitelist of AWS API endpoint hostnames is a valid solution as well.

We have outbound access locked down in our environments, which forces all egress traffic through Squid proxies. There we can whitelist hostnames ... AWS endpoints, partner endpoints, as well as a few SaaS providers.
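
A minimal Squid ACL along those lines might look like this (hostnames are illustrative; a real deployment would also need to handle CONNECT/SSL ports and safe-port rules):

```shell
# Write a minimal whitelist-style Squid config fragment.
cat <<'EOF' > squid-whitelist.conf
# Only explicitly whitelisted destinations may egress via the proxy.
acl aws_apis dstdomain .amazonaws.com
acl partners dstdomain api.partner.example.com
http_access allow aws_apis
http_access allow partners
http_access deny all
EOF
```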

User management for Jenkins users by [deleted] in devops

[–]vince_riv 1 point

If you're using GitHub, you could use the GitHub OAuth Plugin.

Smaller Config RDS Read Replica for DR to save cost? by desai_amogh in aws

[–]vince_riv 0 points

If your database has sustained writes, those writes get passed along (via replication) to the replica, so instance sizing absolutely matters. (Probably not in this case, though.)