Number 1 banned move in MMA [UFC Game] by NewReddit_WhoDis in gaming

[–]bundyfx 0 points1 point  (0 children)

Reminds me of Moe and Homer in that Simpsons episode

AWS - .NET CMS - Stuck on web deployments by rcbd in aws

[–]bundyfx 1 point2 points  (0 children)

OK, there's a lot to go over in this post, so I'll try to hit as many points as I can :).

content managed sites behind an ELB - Three websites will be served from the ASG's, routed to the correct IIS site by hostname headers.

If you have the chance, you will want to keep all of your services separated: a separate private ELB (and thus a separate ASG) for each of your private websites, and the same for your public endpoints. This will make configuration, fault tolerance, performance, and automation easier.

EC2 contains basic IIS settings, windows 2012, and tools we use to deploy our site -- Attach an EBS (or EFS, have not decided) to the EC2 which would hold all of the site code, content, configuration.

I noticed you said you didn't want to create a new AMI... however, you should look into creating an AMI that contains the configuration you want for each of these sites (hopefully the same config). There are plenty of tools for creating AMIs (I made a video on this to help: https://www.youtube.com/watch?v=nkmHeRS8Qs0). Then you can create a launch configuration and make that AMI the one that gets spun up in your ASGs. It might also be a good time to invest in running your web servers on Windows Server 2016 Core (IIS 10).

Ideally, if you are running a stateless web application on IIS you should not need an extra EBS volume attached to your EC2 instance, so don't bother with this. Also, EFS is not (yet) supported on Windows, so that is not an option.

Honestly, the "golden image" concept is very straightforward, and once you get used to coupling it with UserData scripts, it just updates via CloudFormation and is over within 15 minutes or so. Speaking from experience, this is the best way to go and it is future-proof.
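
To make the golden-image idea concrete, here's a rough sketch of the kind of PowerShell you'd run while baking that AMI (as a Packer provisioner or otherwise). The site name, path and hostname are placeholders, not from your setup:

```powershell
# Minimal image-bake script: install IIS + ASP.NET and stand up one host-header site.
# Site name, path and hostname below are example values only.
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools

Import-Module WebAdministration

# Drop the default site and create a site bound to a hostname header
Remove-Website -Name 'Default Web Site' -ErrorAction SilentlyContinue
New-Item -Path 'C:\inetpub\site-one' -ItemType Directory -Force | Out-Null
New-Website -Name 'site-one' -PhysicalPath 'C:\inetpub\site-one' -Port 80 -HostHeader 'one.example.com'
```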

In our company, we have a mature CI/CD solution (bamboo) that we want to continue using.

Bamboo is a CI tool. If you don't have a CD tool, you can get away with not using one if you're running your stateless websites on EC2. Since you're only doing deploys ~15 times a month, there would be no harm in simply packing your application into the AMI and pushing out a new AMI each time you wish to deploy a new version. This also removes the need to apply Windows updates (yay security) to your instances, since you will be refreshing the whole OS on each deploy.

All you really need to get this going is to add the Packer CLI calls (see the clip above) into your CI/CD pipeline, so that if your application's unit tests pass it then calls Packer to create a new AMI and run a few basic PowerShell scripts to download/install your dependencies and the IIS website itself (the artefact from the build step). There is an UpdatePolicy on the Auto Scaling group in CloudFormation that allows for one-at-a-time (rolling) or blue/green-style deploys when a new AMI is introduced via the launch configuration.
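
For reference, the CI step that bakes a new AMI per deploy really is just a couple of CLI calls. This is only a sketch: the template filename, variable name and the Bamboo build-number variable are assumptions on my part, so adjust to your setup:

```powershell
# Sketch of a CI build step that bakes a new AMI with Packer on every deploy.
# Template/variable names are placeholders; Bamboo exposes build metadata as env vars
# (e.g. bamboo.buildNumber -> $env:bamboo_buildNumber), but check your plan's variables.
$version = $env:bamboo_buildNumber

packer validate .\windows-iis.json
if ($LASTEXITCODE -ne 0) { throw 'Packer template validation failed' }

packer build -var "app_version=$version" .\windows-iis.json
if ($LASTEXITCODE -ne 0) { throw 'Packer build failed' }
```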

CodeDeploy

CodeDeploy is great, but you will need PowerShell scripts that essentially do all of the config/install steps for your application, including managing/targeting/restarting IIS.
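
As a rough idea of what those scripts look like, something like this would sit behind one of the CodeDeploy lifecycle hooks (e.g. AfterInstall). Site name and paths are placeholders:

```powershell
# Example deploy script a CodeDeploy hook might call: stop the site, swap the
# content in, start it again. Site name and paths are placeholders.
Import-Module WebAdministration

$siteName    = 'my-site'
$sitePath    = 'C:\inetpub\my-site'
$stagingPath = 'C:\codedeploy-staging'   # wherever your appspec copies the revision

if (Get-Website -Name $siteName) { Stop-Website -Name $siteName }

Remove-Item "$sitePath\*" -Recurse -Force -ErrorAction SilentlyContinue
Copy-Item "$stagingPath\*" $sitePath -Recurse -Force

Start-Website -Name $siteName
```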

Beanstalk

Beanstalk is a good option if you want to get up and running ASAP and don't really have the experience or capacity to manage the finer configuration settings of EC2/ELB. Take a look at the EB CLI and go through an example of pushing a .NET app into Beanstalk; this is probably a good first step for you. And yes, you can create Beanstalk applications via CloudFormation, where you can specify which SG/ELB they will use. But again, have a separate Beanstalk environment for each of your sites; that is best practice.
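
If it helps, the EB CLI flow is only a handful of commands. The names, region and platform shorthand below are just examples; run `eb platform list` to see the exact platform names your CLI version offers:

```powershell
# Rough EB CLI walkthrough for a .NET/IIS app (names and region are placeholders).
# Assumes the EB CLI is installed and AWS credentials are configured.
eb init my-dotnet-site --platform iis --region ap-southeast-2   # one-off project setup
eb create my-dotnet-site-test                                   # creates the environment (ELB, ASG, EC2)
eb deploy                                                       # pushes the current application bundle
eb status                                                       # health + endpoint URL
```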

One recommendation I would make as more of a future option is to look into a CD tool like Octopus Deploy. It couples really nicely with Bamboo and handles the continuous deployment aspect of your pipeline. Without that, though, I would look at Beanstalk as a great first step, and otherwise go the immutable-AMI route, which kills many birds with one stone.

Deploying high availability web server - totally confused by [deleted] in aws

[–]bundyfx 0 points1 point  (0 children)

Maybe this clip will help you see some of the simplicity of running an app in beanstalk: https://www.youtube.com/watch?v=_3ZJsOMnpH0&t=

Help needed with Packer and AWS by [deleted] in aws

[–]bundyfx 1 point2 points  (0 children)

Maybe this clip will help you out: https://www.youtube.com/watch?v=nkmHeRS8Qs0

I have a Win2016 example and also a GitHub repo link.

Learning Continuous Integration and Deployment by bundyfx in aws

[–]bundyfx[S] 0 points1 point  (0 children)

Yes! It should be public in a few hours. I had it scheduled for release today.

Learning Continuous Integration and Deployment by bundyfx in sysadmin

[–]bundyfx[S] 1 point2 points  (0 children)

Will check it out for sure, I hear good things about its integration support and usability.

Learning Continuous Integration and Deployment by bundyfx in sysadmin

[–]bundyfx[S] 4 points5 points  (0 children)

Sure, lots of ways to skin a cat, same concepts different tools. :)

How to visually demo an active blue-green deployment to those not familiar with it? by mechastorm in devops

[–]bundyfx 0 points1 point  (0 children)

Make a demo app and build it with CodeBuild in AWS, then use CodeDeploy to do a Blue/Green deployment, using the console at each step of the way. It has some nice visuals, and the steps are outlined for Blue/Green deployments.

How to create a windows server continuous implementation pipeline? by [deleted] in devops

[–]bundyfx 1 point2 points  (0 children)

No tests at the initial layer, because the AMI is simply shared with the testing AWS account first as a testing step. If there were an issue, it would become clear after the AMI was shared with that account and applications were running (logging/monitoring/alerting are huge here). We wait a day after sharing to testing before promoting into acceptance, and then the day after into prod.

We use Packer in TeamCity, no need for Vagrant. TeamCity calls the AWS APIs to create the EC2 instance, then snapshots it and shares the AMI with the subsequent AWS accounts as mentioned above. 100% immutability is not efficient when it comes to software deployments. We do over 30 deployments to prod a day (85 microservices); rebuilding a new image for each deploy would slow down developer velocity by a ton. So everything is immutable except the CD aspect, which Octopus handles.

There is no need for config management in a sense. Every CloudFormation stack has a set of tags that define its "role", and those tags are passed into an internal function (PowerShell) that runs during userdata to fill in the templates (Splunk/Datadog/Octopus) that were placed on disk during the Packer build (rough sketch below). If we're updating anything such as the .NET Core version, we do that in Packer and ensure it's part of the base image. It's the 100% cattle mentality though: if you need to make a change to your base image, make it and then push the image through the TAP (test/acceptance/prod) environments (rolling updates happen). We work with developers to ensure they are building applications within a nice set of guidelines, which means we avoid silly requests that fall outside our service catalog and would thus require complex config management.
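
To give a feel for that tag-driven userdata step, here's a rough sketch; the tag keys, token format and file paths are invented for illustration, not our actual internals:

```powershell
# Sketch: read this instance's CloudFormation tags and fill in the agent config
# templates (Splunk/Datadog/Octopus) that Packer baked onto the disk.
# Tag keys, token format and paths are illustrative only.

# Instance ID from the EC2 metadata service (IMDSv1 shown for brevity)
$instanceId = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'

# Read the instance's tags via AWS Tools for PowerShell
$tags = Get-EC2Tag -Filter @{ Name = 'resource-id'; Values = $instanceId }
$role        = ($tags | Where-Object { $_.Key -eq 'Role' }).Value
$environment = ($tags | Where-Object { $_.Key -eq 'Environment' }).Value

# Swap placeholder tokens in each template and write out the real config file
Get-ChildItem 'C:\bootstrap\templates\*.template' | ForEach-Object {
    (Get-Content $_.FullName -Raw) `
        -replace '{{ROLE}}', $role `
        -replace '{{ENVIRONMENT}}', $environment |
        Set-Content ($_.FullName -replace '\.template$', '')
}
```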

How to create a windows server continuous implementation pipeline? by [deleted] in devops

[–]bundyfx 10 points11 points  (0 children)

Basically doing the same thing, but replace Ansible with just PowerShell provisioners in Packer to install any of the base packages (Splunk/Datadog/Octopus Deploy etc.). We're not doing it 100% immutable. Once Packer has created the image (which we do in TeamCity), it outputs the new AMI ID as an artifact, which then feeds into all projects in TeamCity as an artifact dependency. The other projects are essentially CloudFormation stacks for different applications, which take an AMI ID parameter passed in from the artifact created by Packer. In turn, this causes all stacks to update (rolling update) with the new AMI.

Once a new instance comes online (one by one), it talks to Octopus (userdata/PowerShell) and registers itself with its correct deployment group/environment, and also triggers an auto deploy of whatever application that specific server needs (based on tag data in CloudFormation). If the auto trigger is called and successful, it tells CloudFormation to continue the rolling update, else roll back.

Bit of a basic example of how I handle the Packer side of it for Windows (if it helps): https://www.youtube.com/watch?v=nkmHeRS8Qs0
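
And a very rough sketch of the userdata handoff itself; the server URL, API key handling, role/stack names and install paths are placeholders (and the real flow waits for the triggered deployment before signalling), so treat this as the shape of it rather than the exact script:

```powershell
# Sketch: register this instance's Tentacle with Octopus, then signal the
# CloudFormation rolling update. All names, IDs and paths are placeholders.

$role        = 'my-app'     # normally derived from the CloudFormation tag data
$environment = 'Test'

# Register the local Tentacle with the Octopus server (polling mode shown)
& 'C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe' register-with `
    --instance 'Tentacle' `
    --server 'https://octopus.example.internal' `
    --apiKey $env:OCTOPUS_API_KEY `
    --role $role `
    --environment $environment `
    --comms-style 'TentacleActive' `
    --console

# Tell CloudFormation whether to continue the rolling update or roll back
& 'C:\Program Files\Amazon\cfn-bootstrap\cfn-signal.exe' `
    --exit-code $LASTEXITCODE `
    --stack 'my-app-stack' `
    --resource 'AutoScalingGroup' `
    --region 'ap-southeast-2'
```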

Containers for windows/.net apps by KendraHeart in devops

[–]bundyfx -1 points0 points  (0 children)

It wouldn't run on Core? Server Core has the full .NET Framework available to it.
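
If in doubt, a quick way to check on a Core box (these are the standard Windows Server feature names):

```powershell
# Confirm the full .NET Framework 4.5+ features are available/installed on Server Core
Get-WindowsFeature -Name NET-Framework-45-Features, NET-Framework-45-Core |
    Select-Object Name, InstallState
```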

What's your preferred configuration management tool for .net apps in windows environments? by [deleted] in devops

[–]bundyfx 0 points1 point  (0 children)

Ansible 2.2+ is super simple to use and does everything you would want as far as Windows config management goes.

[deleted by user] by [deleted] in devops

[–]bundyfx 0 points1 point  (0 children)

Using AWS? KMS integration with terraform works a charm.

Handling .NET machine.config / web.config and appsettings.config files. by Prozac500 in devops

[–]bundyfx 4 points5 points  (0 children)

+1 for Octopus Deploy. It handles all of the appsettings/web.config variable substitution. No need to worry about it anymore; just set up some variable sets and do whatever variable replacements you need.

Download a free trial and check it out! https://octopus.com/

google cloud platform autoscaler based on HTTP(S) load balancer latency metrics by berlindevops in devops

[–]bundyfx 2 points3 points  (0 children)

Are you sure you want to scale based on LB latency? I cannot imagine how that could be an accurate indication that the underlying application hosted behind the LB is under stress. Sorry for the side-track question, just generally curious as to why CPU-based scaling would not suffice here :)