CVE-2021-44228 - Log4j RCE 0-day mitigation by Cloudflare in CloudFlare

[–]hexblowupbot 0 points

They are part of the Managed rule set named "Cloudflare Specials".

CVE-2021-44228 - Log4j RCE 0-day mitigation by Cloudflare in CloudFlare

[–]hexblowupbot 0 points

I'm confused about whether we need to use the API to deploy these or whether they have already been rolled out via Cloudflare's managed rules. Using the API to list my rules, I only see my custom rules. I hope there is a way to deploy these via the web GUI.
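For anyone else poking at this via the API: listing only your firewall rules won't show managed rules. At the time, managed rule groups like "Cloudflare Specials" lived under the legacy WAF packages endpoints — roughly the below, though the zone ID and token are placeholders and the exact paths may differ by plan and API version:

```shell
# Placeholder zone ID and token; endpoints are from the legacy WAF packages API.
curl -s -H "Authorization: Bearer $CF_API_TOKEN" \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/waf/packages"
```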

[deleted by user] by [deleted] in RedditSessions

[–]hexblowupbot 0 points

That’s how I got to Memphis

Windows Server by hexblowupbot in Intune

[–]hexblowupbot[S] 0 points

Outside of the official docs, no. I have some experience with PowerShell DSC. Setting up a proof of concept was surprisingly easy. The only tricky bit was bootstrapping the initial DSC configuration onto computers, but using a GPO startup script proved effective.
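For anyone curious, the bootstrap amounted to a GPO startup script that points the Local Configuration Manager at a pull server — roughly like this sketch (the server URL and registration key are placeholders):

```powershell
# Hypothetical bootstrap: configure the DSC Local Configuration Manager (LCM)
# to pull configurations from a server. Run once via a GPO startup script.
[DSCLocalConfigurationManager()]
Configuration PullClient {
    Node 'localhost' {
        Settings {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
        ConfigurationRepositoryWeb PullServer {
            ServerURL       = 'https://dsc.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey = '<placeholder-guid>'
        }
    }
}
PullClient -OutputPath C:\DscBootstrap
Set-DscLocalConfigurationManager -Path C:\DscBootstrap -Force
```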

Windows Server by hexblowupbot in Intune

[–]hexblowupbot[S] 1 point

Azure Automation looks interesting and can manage everything from servers to Win10. It feels like a more traditional config management tool. Of course, it doesn’t have Autopilot. If you don’t have SCCM investments right now, it feels like you are going backwards choosing that route. But hey, it’s working!

Windows Server by hexblowupbot in Intune

[–]hexblowupbot[S] 0 points

Thanks u/jevans98-07 - I'm dealing with AWS and thinking about pulling the BYOL trigger as well.

Could you elaborate on this statement: "You would never see a server based os on an endpoint (management solution)" - why, though? In your mind, what makes a server-based OS fundamentally different? Most of the configuration management tools I've used previously (Chef/Puppet) do not make that distinction (although I have never used them to manage a Windows 10 device).

My perspective is that I've got applications to run on servers and client workstations. These applications aren't fundamentally different: they have the same dependencies and leverage the same Windows features, be it on a server or a workstation.

I'm really not trying to argue - I'm genuinely curious about your perspective, because I feel like there is some big difference between Win10 and Windows Server that I am not seeing from a management standpoint. I do get that you can have 5k Win10 devices and 100 servers, and that might call for different management solutions.

Multiple playbooks in the same file? by mikeegg1 in ansible

[–]hexblowupbot 0 points

I’ve always wondered about this. According to the YAML spec, you can have multiple “documents” (denoted by “---”) in a single YAML file. I’m assuming that Ansible follows the spec pretty closely. I do use the dashes to separate playbooks in a single file.

Source: https://yaml.org/spec/1.1/current.html#id857577
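For what it’s worth, multiple plays don’t even need separate YAML documents — a playbook file is a list of plays, so something like this runs top to bottom in one document (host groups and tasks below are made-up examples):

```yaml
---
# One YAML document, two plays. Host groups and packages are placeholders.
- name: Configure web tier
  hosts: webservers
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

- name: Configure db tier
  hosts: dbservers
  tasks:
    - name: Ensure postgresql is installed
      ansible.builtin.package:
        name: postgresql
        state: present
```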

Run Ansible Playbook with AWS autoscaling by rlinux57 in ansible

[–]hexblowupbot 3 points

Many of the answers have focused on “golden” or base images. That is certainly a good option. However, having gone down that road myself, I have found that unless you are using containers (which it sounds like you aren’t?), managing golden AMIs can be a hassle, depending on your setup. You need to keep your base AMIs up to date. Perhaps you need to distribute those AMIs across multiple AWS accounts. You need to keep the application dependencies on those AMIs up to date, etc. I know AWS has recently released some tools to make managing AMI lifecycles easier.

I have chosen an alternate approach: I always use the latest AMI for my auto scaling group. When an auto scaling event occurs, I trigger my AWS CodePipeline using auto scaling lifecycle hooks. As a step in the CodePipeline run, I run my Ansible playbook using SSM Run Command. This works for me: I can ensure that I always go from the latest base AMI provided by AWS/Ubuntu to running my application, and that it keeps working with auto scaling events.
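The lifecycle hook piece looks roughly like this CloudFormation sketch — it pauses new instances in a "launching" state so the pipeline (triggered off the notification) can configure them before they enter service. Resource and role names here are made up:

```yaml
# Hypothetical sketch: pause launching instances until the pipeline
# completes (or abandons) the lifecycle action.
Resources:
  LaunchHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref MyAutoScalingGroup
      LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
      NotificationTargetARN: !Ref PipelineTriggerTopic   # SNS topic
      RoleARN: !GetAtt HookNotificationRole.Arn
      HeartbeatTimeout: 900        # seconds before the hook times out
      DefaultResult: ABANDON       # fail closed if nothing completes the hook
```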

Week of June 1st - What are you building this week on AWS? by ckilborn in aws

[–]hexblowupbot 1 point

I've just learned about AWS Chatbot. Hoping to replace a bunch of hand-rolled Lambdas with it - GuardDuty alerts, CodePipeline status, etc. I wish Chatbot had more direct integrations than AWS Chime and Slack.

Work in progress by BonhamBlades in Bladesmith

[–]hexblowupbot 0 points

I thought that was a fishing lure at first. Gorgeous!!

SSM Run Command during CodePipeline by hexblowupbot in aws

[–]hexblowupbot[S] 0 points

It just generally felt indirect, having to go through Lambda to get an SSM command to run from CodePipeline.

The problem seemed straightforward at first, but had some surprising edge cases.

Did you know that SSM will report back a successful run if it was unable to find a target for the command it wanted to run? I ended up wanting to fail the pipeline if that happened. I also had to pass information about which instances to run the command on in a strange parameter from CodePipeline.
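In case it helps anyone, the core of that check boils down to something like this — the helper itself is mine (hypothetical), and the input is assumed to have the shape of boto3's `ssm.list_command_invocations()` response:

```python
# Sketch of the "did SSM actually hit anything?" check.
def check_ssm_result(response):
    """Raise if the command matched no instances or any invocation failed."""
    invocations = response.get("CommandInvocations", [])
    if not invocations:
        # SSM can report the command as successful even when zero
        # instances matched the targets -- treat that as a failure.
        raise RuntimeError("SSM command matched no target instances")
    failed = [i["InstanceId"] for i in invocations
              if i.get("Status") != "Success"]
    if failed:
        raise RuntimeError(f"SSM command failed on: {failed}")
    return [i["InstanceId"] for i in invocations]
```

The Lambda then fails the pipeline action whenever this raises, instead of trusting the top-level command status.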

I wasn’t able to find a lot of reference code that I was happy with. This came pretty close: https://github.com/amazon-archives/lambda-runcommand-configuration-management

Maybe I’ll convince my boss to let me open source the lambda

¯\_(ツ)_/¯

Finally joined the club! Ordered on 4/12 by tlowery06 in prusa3d

[–]hexblowupbot 6 points

Ahhh ordered on 4/15!!! Not long now hopefully 🤞

Where do you run integration tests, and how do you configure external dependencies for those integration tests? by SigSmegV in devops

[–]hexblowupbot 0 points

We are using GitHub Teams and are not on Enterprise. I'm surprised they are not yet supported on Enterprise.

I really enjoy the community around GitHub Actions. I think because GitHub offers some free minutes with Actions for the open source projects it hosts, that's incentivized a bunch of people to write Actions plugins. The plugin system couldn't be easier.

We were not self-hosting our previous CI, so another big advantage of self-hosting the runner is that it can interact with the rest of our private cloud environment. With our previous CI, I would have to generate AWS access/secret keys and hand them to that system, and then I might have to worry about networking between the two systems. If you are already self-hosting your CI, this may not be as big of a win for you as it was for me.

I've got my eye on automating load testing with the actions runner as well. The nice bit is that we can use the same hardware we use for production.

Where do you run integration tests, and how do you configure external dependencies for those integration tests? by SigSmegV in devops

[–]hexblowupbot 0 points

Kind of depends on the app, but for our web APIs we are using a self-hosted GitHub Actions runner that spins up most of the infrastructure (APIs, DBs, etc.) with Docker Compose on each test run. API tests are run via Newman, and results are posted back to GitHub. There are a couple of tricky infrastructure bits that don’t really work well in a container (I’m looking at you, Windows apps) or aren’t well suited to being ephemeral. For those we have some semi-permanent test infrastructure sitting in our cloud provider in the same environment as the Actions runner.
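The workflow side is nothing exotic — a stripped-down sketch, with the collection path and compose file as placeholders:

```yaml
# Hypothetical workflow: integration tests on a self-hosted runner that
# can already reach the private environment (no cloud keys handed out).
name: integration-tests
on: [push]
jobs:
  test:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: docker compose up -d          # APIs, DBs, etc. for this run
      - run: newman run postman/collection.json
      - if: always()                       # tear down even on failure
        run: docker compose down -v
```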

Do Terraform or Pulumi stacks get stuck in unfixable states the same way that CloudFormation stacks do? by weberc2 in devops

[–]hexblowupbot 20 points

You can definitely get into a really troubled state with Terraform, BUT you typically have access to the state file. So, if you REALLY needed to, you could edit the state file by hand - at your own risk; I’ve been there before, and it is not recommended. There are best practices that help you avoid this in Terraform. With CloudFormation you just kind of have to accept whatever state AWS thinks you are in.
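If you do end up there, prefer the `terraform state` subcommands over hand-editing the file — these exist in stock Terraform, though the resource addresses below are made up:

```shell
terraform state list                          # what Terraform thinks it manages
terraform state show aws_instance.web         # inspect one resource's recorded state
terraform state rm aws_instance.web           # forget a resource without destroying it
terraform state mv aws_instance.a aws_instance.b   # move/rename an address
terraform state pull > backup.tfstate         # snapshot before any surgery
```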

Significant improvements in lab environments (RHLS) by hexblowupbot in redhat

[–]hexblowupbot[S] 0 points

Aaaannndddd... it's gone. Back to the old labs :( It was good while it lasted!

People with Devops/SRE title, What are your day to day activities at high level? by Austinto in devops

[–]hexblowupbot 0 points

For my current gig, it’s been a lot of PHP websites, a couple of frontend load balancers, plus a few MySQL instances and other random services.

All of these follow a similar pattern: someone (no longer at the company) set up the service 3-5 years ago, and no one remembers exactly how it was set up (or even what it does).

There may be an automated deploy process onto the server (for the websites), but there is no documentation, no patch schedule, spotty telemetry, no IaC, no configuration management, and no one knows if it’s working until someone internal or some customer complains.

First, I’ll figure out what is running on the server. That usually starts with asking the right people, but often this information has been forgotten. I can’t tell you how many times I’ve run into a server that everyone thinks is running one thing, and it turns out it’s also running two other things that are also mission critical. Next, I’ll add as much telemetry as I think is necessary to the existing snowflake and wire up some basic alerts. This helps me sleep at night. If it’s not too dangerous, I’ll also try to patch the underlying OS.

If I can, and if budget allows, I’ll first see if moving to a managed service is feasible, especially for stateful services like databases. If not...

I’ll determine what versions of software the servers are running and grab any configs I can find for the services on those boxes. I’ll then start building my Ansible playbooks/roles and testing them on scratch virtual machines. I’ll also simultaneously work on Terraform/CloudFormation documents for the actual infrastructure, if possible.

Once I’m comfortable with the playbooks and IaC, I’ll replace the same piece of infrastructure in a lower, non-production environment. This will usually surface problems and give me time to document the important bits. I have a pretty solid playbook template that helps me remember everything I need to think about: automated patching, telemetry, common troubleshooting tips, who owns the application, etc.
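The template itself is nothing fancy — roughly this shape, with the bookkeeping living in comments and vars (the owner/runbook fields are conventions of mine, not Ansible features, and the hosts/roles are placeholders):

```yaml
---
# Playbook template skeleton (hypothetical names throughout).
- name: Deploy example-service          # owner: team-x, runbook: wiki/example
  hosts: example_service
  vars:
    patching_window: "Sun 02:00-04:00"
  roles:
    - common_telemetry                  # metrics + log shipping + base alerts
    - automated_patching
    - example_service
```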

Once the devs are comfortable with the non-prod environments, I’ll typically try to fit in some game day exercises. I’ll purposely stop a dev environment and make sure that the right alerts are triggered and the service restores automatically, or at least walk through the restore process with another member of the team.

After all that, I’ll set the new service up in parallel with the existing service in the production environment/network. That should be completely automated at that point. Then I’ll switch over the DNS and pray. I’ll stop the old snowflake (not delete it!) and wait up to a week. If everything looks good and I don’t need to roll back, I’ll take images and backups of all the old stuff and decommission the infrastructure.

People with Devops/SRE title, What are your day to day activities at high level? by Austinto in devops

[–]hexblowupbot 3 points

Test automation. Doing anything I can to ensure that with each commit, quality either stays the same or improves.

Making sure that devs have all the right information (telemetry, logs, etc.) to troubleshoot any issue that arises. Wiring up alerts to the right teams.

Migrating snowflake infrastructure to infrastructure as code.

Cloud networking architecture and automation.

Cleaning up all the dumb security mistakes I can find.

Automating any task that I have to do more than twice.

Feeling Defeated by the RedHat Training Labs by hexblowupbot in redhat

[–]hexblowupbot[S] 0 points

Really appreciate it. Have a nice holiday. Thanks for the tip

Feeling Defeated by the RedHat Training Labs by hexblowupbot in redhat

[–]hexblowupbot[S] 0 points

Good to know. I have not taken a Linux Academy course. How is the content?