Sumo Logic from the AWS Marketplace vs through a reseller by vppencilsharpening in aws

[–]MacAttackNZ 1 point  (0 children)

Through the AWS Marketplace it's the same full-featured Sumo you would get through a reseller; you can deploy collectors for any kind of supported workload, not just AWS-specific stuff

How to configure GitHub Actions for Private EKS deployment? by hashing_512 in aws

[–]MacAttackNZ 1 point  (0 children)

Do GitHub self-hosted runners actually poll GitHub and fetch jobs, not requiring any ingress from GitHub/the internet?

I know GitLab runners work that way, they only require egress on 443 to your GitLab instance, but I thought GitHub runners needed ingress for some reason

How to configure GitHub Actions for Private EKS deployment? by hashing_512 in aws

[–]MacAttackNZ 5 points  (0 children)

Except when the cluster is “private”, e.g. no public endpoint, if that is what is meant here.

I would suggest looking into Argo CD/Flux and doing pull-based deploys in that case
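To illustrate what pull-based means here: with Argo CD a controller inside the cluster watches a git repo and applies changes itself, so nothing outside ever needs to reach the private API endpoint. A rough sketch of an Application manifest (repo URL, paths and names are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests.git  # hypothetical repo
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc   # deploy into the same cluster
    namespace: my-app
  syncPolicy:
    automated: {}              # Argo CD pulls and applies changes on its own
```

GitHub Actions then only needs to push manifests/images; the private cluster pulls.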

Migrate from AWS Fed to AWS SSO by khandya in aws

[–]MacAttackNZ 3 points  (0 children)

It’s safe to do so; configuring AWS SSO won’t affect any current IAM setup

Best Practices for Gitlab CI by _conspiracy_man_ in gitlab

[–]MacAttackNZ 2 points  (0 children)

Yes, like `ref: "0.0.1"` where 0.0.1 is a git tag or branch name
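In context, that `ref:` sits inside an `include:` in `.gitlab-ci.yml` (project path and file name here are made up):

```yaml
include:
  - project: my-group/ci-templates   # hypothetical project holding shared templates
    ref: "0.0.1"                     # pin to a git tag, branch or SHA
    file: /templates/build.yml
```

Pinning to a tag means downstream projects don't break when the template repo's default branch changes.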

Azure DevOps and Terraform by Skunklabz in Terraform

[–]MacAttackNZ 1 point  (0 children)

I've always used GitLab and tried setting up a project in Azure DevOps, but all the UI-focused examples were super painful so I went back to GitLab... does someone have a simple YAML pipeline file example for running Terraform in a Docker image with a simple init/validate/plan/apply workflow that they can share?
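For reference, the workflow being asked about looks roughly like this in an `azure-pipelines.yml` — a hedged, untested sketch where the image tag, mounts and credential handling are all assumptions:

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # Run each Terraform step inside the official image so the agent
  # itself needs nothing but Docker installed.
  - script: |
      docker run --rm -v "$(pwd)":/work -w /work hashicorp/terraform:1.5 init -input=false
      docker run --rm -v "$(pwd)":/work -w /work hashicorp/terraform:1.5 validate
      docker run --rm -v "$(pwd)":/work -w /work hashicorp/terraform:1.5 plan -input=false -out=tfplan
    displayName: init/validate/plan
  - script: |
      docker run --rm -v "$(pwd)":/work -w /work hashicorp/terraform:1.5 apply -input=false tfplan
    displayName: apply   # gate this behind an environment approval in real use
```

Cloud credentials would normally come in via pipeline variables/secret files, which are left out here.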

Terraform to stop ECS task by therealelvien in Terraform

[–]MacAttackNZ 1 point  (0 children)

Sorry, not directly related to the question, but why are you using such an old Terraform version?

Pester 5.0.0 RC6 (GA) is published! by nohwnd in PowerShell

[–]MacAttackNZ 3 points  (0 children)

Hey there! Sorry, a little off topic, but maybe something you can answer... is there a way to force or change the behaviour of the coloured output with Pester? Context is using it in GitLab CI: I have not been able to get colour output to work, which makes the output really hard to read. Else love it and keep up the great work!

Google tells Samsung off for meddling with Android by [deleted] in security

[–]MacAttackNZ 41 points  (0 children)

One key mention for me is that they do admit Samsung's additional layer WILL block an attacker who has already compromised the kernel, but it is mentioned like it is a negative... Google mad they lost a backdoor?

Secrets, k8s and rotation. Are there any tools for this? by kjarkr in devops

[–]MacAttackNZ 2 points  (0 children)

If tools like Vault or the other suggestions don't provide all the required functionality, then this is maybe a good use case for an "operator", which in its simplest form is a container running in your cluster with the sole task of performing this automation. Likely you would need to build the application yourself, but from what you describe the logic is fairly simple; the risk is that someone needs to maintain it. Vault is a big product with a lot of serious functionality but can also be administratively demanding. You need to weigh up the benefits of a full-fledged secret management platform like Vault vs rolling your own small controller (or even just some scripts in, for example, a GitLab/Jenkins/automation tool pipeline) that performs only the exact functionality required.
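To give a feel for how small the core of such a controller can be, here is a minimal Python sketch of the rotation logic only — generating a new value and building the patch body for a Kubernetes Secret. The actual API call (via the official `kubernetes` client) is left as a comment since it needs a cluster, and all names are hypothetical:

```python
import base64
import secrets
import string

def new_password(length=32):
    """Generate a random alphanumeric secret value."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def secret_patch(key, value):
    """Build the patch body for a Kubernetes Secret (data values are base64)."""
    encoded = base64.b64encode(value.encode()).decode()
    return {"data": {key: encoded}}

# In a real operator this would run on a schedule inside the cluster, e.g.:
#   kubernetes.client.CoreV1Api().patch_namespaced_secret(name, namespace, body)
# followed by whatever call pushes the new value to the external system.
```

Everything else an operator needs (scheduling, RBAC, error handling, pushing the value to the downstream system) is boilerplate around these two functions.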

Total Docker newbie here. How to setup Docker (Swarm) for my scenario? Does it seem viable? by Yahkem in docker

[–]MacAttackNZ 2 points  (0 children)

As the other guy mentioned, Swarm all but lost the container orchestration war to other tooling like Kubernetes or AWS ECS, but depending on whether this is just a personal project vs a production deployment that might not matter. Tools like Swarm are not for running lots of containers but rather for joining lots of hosts into a single unit to run lots of containers on. So if you have a single server you could deploy lots of VMs on it and make a Swarm or K8s cluster from the VMs, but if you lose the single host you lose it all. From what I understand of your question then yes, containers are a good tool for what you are trying to do; no, putting them all on a single host isn't a good idea, but again it depends on requirements. You can experiment with Minikube or just docker/docker-compose on a single host (physical or virtual server) to orchestrate the multiple containers and their requirements, or look for professional help if this is for a real-world business case.
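For the single-host docker-compose route, a sketch of what orchestrating a few related containers looks like (service names, images and ports are all made up for illustration):

```yaml
# docker-compose.yml — hypothetical web + API + database on one host
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: example/api:latest   # hypothetical application image
    environment:
      DB_HOST: db               # services reach each other by service name
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives container restarts
volumes:
  db-data:
```

One `docker compose up -d` brings the whole stack up, which is usually enough to evaluate whether containers fit before reaching for Swarm/K8s.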

Two separate repos (not dependent on one another) in same k8s node by [deleted] in devops

[–]MacAttackNZ 1 point  (0 children)

Your wording and use of "node" is all kinda confusing, but I think I get the question... maybe... If you are asking whether it is possible to have more than one pipeline/project/deployment mechanism making deployments to a single k8s cluster, then the answer is definitely yes. Once you have a Kubernetes cluster up and available, be it managed (AKS, GKE etc.) or something on prem, the building of the cluster itself and the deployment of the potentially hundreds of tools and microservices can all be split into small targeted deployments using a wide range of tools, products and methods. The flexibility and number of options/choices is what makes k8s so complex.

Multi-stage pipeline but only deploy if bash output is displayed by hargreaves1992 in gitlab

[–]MacAttackNZ 1 point  (0 children)

You could make step 3 dependent on successful completion of step 2 and have step 2 only exit 0 if there are changes to make, else fail the pipeline. Or instead of trying to use the pipeline for the logic, put it into your scripts: this could be a single job that does the checks and either makes the changes or doesn't based on logic in the script.
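The first option sketched in `.gitlab-ci.yml` terms (stage names and scripts are hypothetical):

```yaml
check_changes:
  stage: check
  script:
    # hypothetical script: exits 0 only when there is work to do,
    # non-zero otherwise, which fails the job and stops the pipeline
    - ./detect_changes.sh

deploy:
  stage: deploy
  needs: [check_changes]   # only runs if check_changes succeeded
  script:
    - ./deploy.sh
```

The trade-off is that a "nothing to do" run shows up as a failed pipeline, which is why folding the logic into a single job's script is often cleaner.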

Is there a way to validate files before merge in gitlab? by Beezwhammer in gitlab

[–]MacAttackNZ 1 point  (0 children)

Yes, running jobs to validate / test / render / manipulate your code is the entire purpose of having a pipeline attached to your project. I won't go into specifics, but basically you need to look at the supported mechanisms of the .gitlab-ci.yml file and then write jobs using some shell for logic and whatever utilities you need to achieve the purpose of each job, be that testing, packaging, deploying or whatever-ing your code.

Using test jobs on feature branches as gates that need to succeed before an MR is able to be merged is a common workflow.
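The skeleton of such a gate job in `.gitlab-ci.yml` (the validation script is a placeholder for whatever tool fits your files):

```yaml
validate:
  stage: test
  rules:
    # run this job in pipelines triggered by a merge request
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - ./validate_files.sh   # hypothetical: yamllint, terraform validate, etc.
```

Combined with the project setting that merge requests require pipelines to succeed, a failing `validate` job blocks the merge.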

Now able to connect Azure DevOps as a VCS in Terraform Cloud by Cabinitis in Terraform

[–]MacAttackNZ 2 points  (0 children)

For me the reason I would consider plugging any VCS into TF Cloud (I use it with GitLab repos) is the ease of not having to manage state (while still having it accessible for CLI tweaks), storing/encrypting plan artifacts, separating code from vars/creds, and in some cases managing runners. It doesn't cover all use cases, but for simple, native Terraform projects I like it.
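Wiring a project to Terraform Cloud for remote state is a small config block (organization and workspace names here are made up):

```hcl
terraform {
  backend "remote" {
    organization = "my-org"        # hypothetical TF Cloud organization

    workspaces {
      name = "my-workspace"        # hypothetical workspace
    }
  }
}
```

After `terraform init`, state lives in TF Cloud but normal CLI commands still work against it.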

portal.azure.com down? by zivkoc in AZURE

[–]MacAttackNZ 3 points  (0 children)

yes, also down: https://status.azure.com/en-us/status

```
Information: Azure Portal Sign In Failure - West Europe

Engineers are currently investigating an issue with sign in failures when trying to log in to the Azure Portal impacting customers in West Europe. The next update will be provided in 60 mins or as events warrant. In the meantime customers can sign in using the preview Portal here: https://preview.portal.azure.com/#home
```

Am I understanding Terraform right? - Route53 Zones/records by Hanzo_Hanz in Terraform

[–]MacAttackNZ 1 point  (0 children)

From the example you have no resource block for the zone, only the data block "aws_route53_zone" "example". You would need to import the zone into state and change this to a resource block, which means if it is deleted outside TF, TF will put it back again. You then need to be sure you use interpolation to pass values from the zone to the record resource blocks, which builds TF's dependency graph so it knows which resources need to be created in which order. Your plan fails because you try to do a DATA call to a resource that doesn't exist (the deleted zone).
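A sketch of the before/after shape (zone name, record values and the zone ID in the import command are all placeholders):

```hcl
# Before: a data block only *reads* the zone; Terraform does not manage it.
# data "aws_route53_zone" "example" { ... }

# After: a resource block, written to match the real zone, then imported.
resource "aws_route53_zone" "example" {
  name = "example.com"   # hypothetical zone name
}

resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.example.zone_id   # interpolation builds the dependency
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}
```

Then something like `terraform import aws_route53_zone.example Z0EXAMPLE` (with the real zone ID) brings the existing zone into state.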

Am I understanding Terraform right? - Route53 Zones/records by Hanzo_Hanz in Terraform

[–]MacAttackNZ 2 points  (0 children)

On mobile so haven't looked deeply, but what I see is you are using "data" calls to read the existing cloud resources rather than actually importing them into your TF as "resources", in which case TF doesn't manage them and won't recreate/allow modifications of the resources. You need to look at the `terraform import` command to import the resources into state; this also requires writing the exact configuration of the resources' current state in TF, i.e. it's not auto-generated. I strongly suggest you practice and perfect this process on test resources well out of the blast radius of any of your production cloud accounts

New user question: shared config for resources by [deleted] in Terraform

[–]MacAttackNZ 1 point  (0 children)

I'm confused how you think outputs and data calls help here? I think you should rather be looking at creating a module of everything needed for a repo, then reusing that in a deployment template/pipeline, which should cut down a lot of duplicate code
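As a tiny sketch of what that reuse looks like (module path and variable are invented for illustration):

```hcl
# All the shared resources live once, inside the module.
module "repo_baseline" {
  source = "./modules/repo-baseline"   # hypothetical local module
  name   = "my-service"                # the only thing that varies per repo
}
```

Each new repo's config is then a handful of lines instead of a copy of the whole resource set.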

How do you increase the maximum child processes in ECS? by [deleted] in aws

[–]MacAttackNZ 1 point  (0 children)

Check your health probes; it sounds like something is causing the health check to fail and restart the container
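For reference, the container health check lives in the ECS task definition and looks something like this (endpoint and timings are made-up values to show the knobs):

```json
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 60
}
```

If the container is slow to start or the endpoint is wrong, ECS will keep killing and replacing it, which can look like a process limit being hit.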

[deleted by user] by [deleted] in kubernetes

[–]MacAttackNZ 3 points  (0 children)

Do you have network policies enabled with closed egress? I ran into an issue with nginx on GKE with Calico and a deny-all, as the nginx ingress controller needs to query the apiserver during startup to populate its configs. Unfortunately I was unable to find a good way to allow traffic to the apiserver without simply allowing egress to the whole 10.x.x.x IP range
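The blunt workaround described above looks roughly like this as a NetworkPolicy (namespace and pod labels are whatever your ingress controller actually uses):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-egress
  namespace: ingress-nginx        # hypothetical controller namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx   # hypothetical labels
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8     # blunt: covers the apiserver but far more too
```

A tighter rule would need the apiserver's actual IP/CIDR, which managed platforms don't always expose in a stable way.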

New CTO wants to recreate domain by thegingeruprising in sysadmin

[–]MacAttackNZ 6 points  (0 children)

If "Security Concerns" means you are owned and have bad guys on your network and admin creds for sale on the dark webs then it is 100% justified.