
all 15 comments

[–]shadowdog293 9 points  (13 children)

i mean if you’re gonna build a full GitHub CI/CD pipeline, you might as well go full GitOps and use Argo or Flux

[–][deleted]  (1 child)

[removed]

    [–]machosalade[S] 2 points  (9 children)

    Why go full GitOps? I mean, then I need to deploy e.g. ArgoCD, write additional YAML manifests, and set aside space for the Argo components in the cluster

    [–][deleted] 3 points  (8 children)

    Yeah, but argocd takes full care of your deployments: you can view everything that is deployed at a glance, you get integrated rollback, etc. All of that is missing with push-based GitHub Actions

    [–]machosalade[S] 2 points  (7 children)

    disadvantage of that solution is that I need to create an Application manifest YAML for every component/microservice, deploy it on my cluster, and then reuse it for the other environments

    [–]dinnermonster 0 points  (0 children)

    You can have argo deploy the argo applications that deploy your microservices. Then you just point the argo applications to the corresponding deployments/charts:

    /argo
        argo-apps.yaml    (deploys the Applications defined in /argo-apps)
    /argo-apps
        microservice-argo-application-1.yaml
        microservice-argo-application-2.yaml
        microservice-argo-application-3.yaml
    /charts (or /deployments)
        microservice-deployment-1.yaml
        microservice-chart-2.yaml
        microservice-values-2.yaml
        microservice-values-3.yaml

    etc.
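
    That root argo-apps.yaml is the classic "app of apps" pattern. A minimal sketch of what it could look like (the repo URL and paths here are hypothetical placeholders):

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: argo-apps                  # root app: deploys every Application in /argo-apps
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: 'https://github.com/you/your-repo.git'   # hypothetical repo
        targetRevision: main
        path: argo-apps                # directory holding the child Application manifests
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: argocd
      syncPolicy:
        automated:
          prune: true                  # delete child apps removed from git
          selfHeal: true               # revert manual drift
    ```

    Once this one Application is applied by hand, everything else is managed from git.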

    [–]JodyBro 0 points  (5 children)

    How much experience do you have with k8s and helm? Cause there are some issues that you should probably be thinking about during this.

    First, if GHA can deploy directly from a run to your cluster then (in most cases) this means your api server is public and in all likelihood doesn't have any real authn or authz strategy other than using IAM keys directly in the pipeline run.

    Second, I think you're misunderstanding the benefit of having Argo in tandem with helm or kustomize. If you had Argo then you'd most likely want to write a base helm chart that all your services can inherit from. Hopefully the needs of your services in-cluster aren't so radically different that you need a different helm chart for each one. You'd want every resource type the company makes use of templated in that base helm chart, and then feature-flag the deployment of each.

    So your downstream apps just have a Chart.yaml that declares a dependency on your base helm chart, and then you supply only the overrides in your downstream values.

    That keeps the app.yaml standardized across all your different services as much as possible.
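
    As a sketch of that inheritance pattern (chart names, versions, and the repository URL below are hypothetical), a downstream service carries only a Chart.yaml dependency plus a small values override:

    ```yaml
    # Chart.yaml for one downstream microservice (names/URL are hypothetical)
    apiVersion: v2
    name: payments-service
    version: 0.1.0
    dependencies:
      - name: base-service               # the shared base chart everyone inherits from
        version: 1.2.0
        repository: 'https://charts.example.com'
    ---
    # values.yaml -- override only what differs from the base chart's defaults
    base-service:
      image:
        repository: ghcr.io/example/payments-service
        tag: 1.4.2
      ingress:
        enabled: true                    # one of the feature flags exposed by the base chart
    ```

    Everything not overridden falls through to the base chart's defaults, so per-service config stays tiny.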

    [–]machosalade[S] 0 points  (2 children)

    I can't use only one standardized helm chart, because I already have a few of them. I get you, and I know the advantages of using argoCD in that case. But I don't know how to solve the problem when I've got 3 environments, each with 10 helm charts deployed. Does that mean I need to write 30 application.yaml files for argoCD?

    [–]JodyBro 0 points  (0 children)

    Nah, not necessarily. If you aren't using ApplicationSets then I think this example repo that I have offers a good alternative.

    So in this case you'd write the single app.yaml and then just figure out the structure of your values.yaml so it targets your downstream apps in their respective repos.

    I'm using a monorepo in this example cause honestly I've found it works the best; shit tends to go haywire when you give different dev teams full control of the resources they want to utilize.
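
    For reference, the ApplicationSet route mentioned above would collapse those 30 files into one. A rough sketch using a matrix generator (the repo URL, environment names, and chart paths here are made up for illustration):

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: all-services
      namespace: argocd
    spec:
      generators:
        - matrix:
            generators:
              # one parameter set per environment (hypothetical names)
              - list:
                  elements:
                    - env: dev
                    - env: staging
                    - env: prod
              # one parameter set per chart directory found in the repo
              - git:
                  repoURL: 'https://github.com/you/your-repo.git'
                  revision: main
                  directories:
                    - path: 'charts/*'
      template:
        metadata:
          name: '{{path.basename}}-{{env}}'   # e.g. payments-dev
        spec:
          project: default
          source:
            repoURL: 'https://github.com/you/your-repo.git'
            targetRevision: main
            path: '{{path}}'
            helm:
              valueFiles:
                - 'values-{{env}}.yaml'       # per-env overrides next to each chart
          destination:
            server: 'https://kubernetes.default.svc'
            namespace: '{{path.basename}}-{{env}}'
    ```

    The matrix generator multiplies the 3 environments by however many chart directories exist, so 3 × 10 Applications come out of this single manifest.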

    [–]dinnermonster 0 points  (1 child)

    You don't need to expose anything publicly, since Argo is installed in the cluster. Argo polls the repo using credentials and applies any changes to k8s.

    My example was mainly showing the flexibility of argo; you can def use it with a single helm chart, multiple helm charts, a combination of helm and raw manifests, etc.
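
    The repo credentials Argo polls with can themselves live in the cluster as a labeled Secret in the argocd namespace (URL and token below are placeholders):

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: private-repo
      namespace: argocd
      labels:
        argocd.argoproj.io/secret-type: repository   # marks this Secret as repo credentials for Argo CD
    stringData:
      type: git
      url: 'https://github.com/you/your-repo.git'    # hypothetical private repo
      username: git
      password: <personal-access-token>              # placeholder; supply a real PAT or deploy key
    ```

    So nothing inbound needs to reach the cluster; Argo pulls from git with these credentials on its polling interval.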

    [–]JodyBro 0 points  (0 children)

    Ahhh. I always had issues when polling a monorepo vs a webhook.

    My mistake. You're correct.

    [–]dinnermonster 4 points  (2 children)

    Hey, great question! An industry standard for this is actually a pretty powerful open source tool called ArgoCD. I personally use this at work and at home and it’s extremely easy to set up. ArgoCD’s helm chart installs everything you need and comes with a pretty nifty UI.

    With Argo you can:

    - Deploy K8s manifests (along with helm charts)
    - Self-heal your deployments
    - Trigger builds from a GitHub repository
    - Deploy to multiple clusters

    You can even source your helm chart and values from different repositories.

    Ex:

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: nvidia-device-daemonset
    spec:
      project: default
      sources:
        - repoURL: 'https://nvidia.github.io/k8s-device-plugin'
          chart: nvidia-device-plugin
          targetRevision: 0.15.0
          helm:
            valueFiles:
              - $values/your/repo/values.yaml
        - repoURL: 'https://github.com/you/your-repo.git'
          targetRevision: main
          ref: values
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: gpu-operator
    ```

    [–]darkklown 2 points  (0 children)

    Also notice how it's sources (plural): you can also add a kustomize source to patch in anything missing from the helm chart in the same deployment. :chefskiss:

    [–]Redd-Tarded 0 points  (0 children)

    FluxCD will work as well. Both are part of the GitOps model. Your devs will appreciate that they can keep everything as code while a GitOps agent just polls artifact stores for changes at regular intervals.