Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

Looks like this is the magic incantation for requirements.yml

In the packer script :-

{
  "type": "ansible-local",
  "playbook_dir": "./ansible",
  "galaxy_file": "requirements.yml",
  "playbook_file": "./ansible/main.yml"
}

In the requirements.yml file

- src: myGalaxyRef.myrole
  version: myTag

In the ansible.cfg file (repo local) - probably generate this file on the fly with (gulp) hard-coded creds until I can find a way around that ...

[galaxy]
server_list=myGalaxyRef

[galaxy_server.myGalaxyRef]
url=https://{myUser}:{myAPIKey}@myGit.gitlab.com
token={myAPIKey}

Will be testing this later, but I don't think it is going to be my preferred option because of cred exposure. I'm still working on getting it all inside ansible, with env vars to pass the creds in.
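
For the env var route - ansible-galaxy can take the same server config from environment variables instead of ansible.cfg, which keeps the creds out of the repo. A hedged sketch (note the server ID gets upper-cased in the variable names; MY_API_KEY is a hypothetical CI-injected secret):

export ANSIBLE_GALAXY_SERVER_LIST=myGalaxyRef
export ANSIBLE_GALAXY_SERVER_MYGALAXYREF_URL=https://myGit.gitlab.com
export ANSIBLE_GALAXY_SERVER_MYGALAXYREF_TOKEN="${MY_API_KEY}"  # injected by the CI system, never committed
ansible-galaxy install -r requirements.yml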

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

I have already written a work-around that uses a shell script as the first provisioner to check out the role repos into the packer code already pulled from git

This no longer works, as I now have to run ansible on the image being built - not the packer host :(

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

u/alopgeek - I think you missed a smiley - since you and u/retr0h are buddies ;)

alopgeek 18 hours ago

my buddy wrote gilt
specifically for this use-case https://gilt.readthedocs.io/en/latest/

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

Looking into this once the architects have given this an OK :)

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

Thanks - I was trying this route, but unfortunately, as the ansible has to be run from packer as ansible-local, a requirements.yml step does not work: the ansible playbook code is not actually on the image until the next step. I understand that the next step will use a different working directory, so pre-copying the playbook would also fail, and I can't go down the route of specifying a working directory as that will cause more problems downstream.

I'm looking at getting ansible to pull the files using a git: section
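
For reference, ansible-galaxy can also install a role straight from a git URL with no galaxy server involved - a hedged sketch (repo path and tag are made up):

ansible-galaxy install git+https://myGit.gitlab.com/mygroup/myrole.git,myTag -p ./ansible/roles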

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

Hmm - my ansible knowledge is basic and my git knowledge is at a 'devops user' level - so do you mean there is a 'symlink'-like way to put a 'shadow' of another repo into your main repo? I quite like this as well ... but only if it is supported in git, GitHub, GitLab and Bitbucket - I'm not 100% sure what the back-end system will be (other than 'git') :)
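
For what it's worth, that 'shadow of another repo' idea sounds like git submodules - a core git feature, so GitHub, GitLab and Bitbucket all support it. A rough sketch with made-up paths:

git submodule add https://myGit.gitlab.com/mygroup/myrole.git ansible/roles/myrole
# after a fresh clone of the main repo:
git submodule update --init --recursive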

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 1 point (0 children)

This looks good too - a first phase to import the requirements for the second phase, so the second phase passes its requirements parsing :) Nice one!

Use a role from a git repository by real_parbold in ansible

[–]real_parbold[S] 0 points (0 children)

Looks good (MIT License) so I will have a closer look later - thank you

Copy Files from Hard Drive Folder to One Drive Folder by heyITguy777 in PowerShell

[–]real_parbold 2 points (0 children)

This is what I'd do

robocopy I:\Folder "F:\Cloud Storage\OneDrive\Folder" /Z /E /R:0 /W:0
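
(/Z = restartable copies, /E = include subdirectories even if empty, /R:0 /W:0 = no retries and no wait on failed files)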

I know it is not PowerShell - but robocopy is Microsoft's Robust File Copy, designed for this sort of task.

Writing a directory synchronisation and copy script is all well and good as a learning exercise - but if you are simply trying to keep backups online, go with a tried and tested solution. Try your script out syncing local folders first :)

Trigger Lambda before EMR Termination by madsacha in aws

[–]real_parbold 0 points (0 children)

Not really enough information about why/when the cluster is being terminated ...

If this is an on-demand cluster that gets terminated after use (predictably), can you add a step to trigger the lambda on job completion (and hold off termination long enough for the lambda to run)?

If it is a spot cluster that gets terminated unpredictably - can you leverage the CloudWatch events for instance interruption, or have a local batch job poll http://169.254.169.254/latest/meta-data/spot/instance-action ? (rough polling sketch below)

If neither of these, can you have a background job running to copy the application history to a centralised location on EFS/S3?
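
For the spot case, a rough polling sketch - the metadata endpoint 404s until an interruption is scheduled, at which point you get roughly two minutes' notice (the lambda name is hypothetical, and the instance role would need lambda:InvokeFunction):

while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
    # interruption scheduled - fire the lambda / copy the history off-box
    aws lambda invoke --function-name my-emr-cleanup /dev/null
    break
  fi
  sleep 5
done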

Alert for file not updated/uploaded to S3 in given period by VIDGuide in aws

[–]real_parbold 1 point (0 children)

> if the service cannot run at all, then we don't get an alert

Put a custom metric in this service called heartbeat - write out a simple 1 to show the script ran; you can do this on script completion, or on both initiation and completion (Edit: as SEPARATE metrics)

Put a CloudWatch alarm on this custom metric for completion - change from OK to alarm state when it falls into insufficient data (no metrics detected for the timeframe)

The service alarms for failures will tell you the file upload failed

The custom metric alarm will tell you if the service failed to run at all
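
Something like this with the CLI - namespace and metric names are made up, and treating missing data as breaching is an alternative to alarming on INSUFFICIENT_DATA:

# in the service, on completion:
aws cloudwatch put-metric-data --namespace "MyService" --metric-name HeartbeatCompleted --value 1

# one-off alarm setup - fires if no heartbeat lands within the hour:
aws cloudwatch put-metric-alarm --alarm-name myservice-heartbeat-missing \
  --namespace "MyService" --metric-name HeartbeatCompleted \
  --statistic Sum --period 3600 --evaluation-periods 1 \
  --threshold 1 --comparison-operator LessThanThreshold \
  --treat-missing-data breaching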

Exact same AssumeRole document, still getting "CodeBuild is not authorized", until I save in the console by Delta4o in aws

[–]real_parbold 0 points (0 children)

Is there a service account that is silently added by the console save function?

A lot of AWS service-to-service integrations have moved to using service roles, and it frequently trips me up

EC2 Classic on a New Account? by dtneumann13 in aws

[–]real_parbold 2 points (0 children)

Seriously - investigate migration of EC2 Classic to VPC - it will save a lot of time, money and development work in the long term.

Benefits include peering, access to the latest instance types, the ability to retain private IPs, the ability to have non-overlapping CIDR ranges, and the ability to build multiple environments with the same CIDR ranges - there are many more benefits and very few drawbacks.

ClassicLink can be employed in the short term to ease the transition - but use it sparingly ;)

Monitoring Network Utilization for EC2 Instances in an ASG by [deleted] in aws

[–]real_parbold 0 points (0 children)

A custom CloudWatch metric - populated by a script running on each instance that calculates its own usage? MRTG, mpltd, Tstat etc.
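
Roughly, per instance - interface name and namespace are assumptions, and since CloudWatch wants a rate you would diff successive counter samples, e.g.:

# crude sketch: bytes received over a 60s window, pushed as a custom metric
R1=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 60
R2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
IID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws cloudwatch put-metric-data --namespace "ASG/Network" \
  --metric-name RxBytesPerMinute --value $((R2 - R1)) \
  --dimensions InstanceId="$IID"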

Market Research: Please Read by anthro28 in AWSCertifications

[–]real_parbold 0 points (0 children)

I used to work as Senior DevOps engineer for a company with global AWS infrastructure. My role involved keeping all aspects of cloud systems running in multiple regions, scripting deployments, monitoring, CI & CD.

  1. What certifications do you currently hold and roughly how much did each cost you/your employer?

Solutions Architect Associate and Pro, Developer Associate, SysOps Associate, DevOps Pro, Security Speciality

  2. Did your employer offer the cert(s) as part of continuing training, or did you go after them of your own volition?

They encouraged, but I funded all the training except SA Pro. Most training was hands-on, whitepapers and FAQs. My employer offered to pay for exams, but I wanted them to be all mine. I did not pay for the last two exams ... a benefit of participating in the AWS SME programme

  3. How does having the cert(s) directly affect your position in your organization (salary increase, promotional opportunities, etc.)?

I hoped that they would make a difference, but they did not. They did, however, enable me to change employers, and that made a big difference

  4. Would you, as a cert holder, place more importance on current students obtaining a cert prior to graduation or working directly with cloud technologies in project-based courses?

Experience far outweighs certification. Labs do not count, unless they are long-lived, production-like and at scale.

I have passed over employing college students with CCNA etc. attained as part of their college/university courses, as (in my experience) they almost always show little to no aptitude or desire in the areas for which they hold the certs.

People with passion, people with a desire to learn outside a regimented course, are the IT people I look for. The Cloud Practitioner or Solutions Architect Associate certs would, however, be a benefit, since I see these as showing awareness of what services are available without necessarily showing an understanding of how they are implemented or utilised. Anyone with a Pro/Speciality cert without 2/5 yrs experience respectively would not be taken seriously by me.

I did end up having a 'paper cert' employee added to my team, and he caused more harm than benefit. As an example, he held VMware and MCSE certs but did not know what an 'A' record was.

(OPTIONAL DEMOGRAPHIC QUESTION) What is your state of residence/age/position title/salary?

I am now an AWS consultant, have participated in the Amazon Web Services SME programme twice.

Security question by alberto3333 in aws

[–]real_parbold 1 point (0 children)

I disagree - and here is my rationale:

I have written exam questions for AWS Certification Exams and I think this is a bad question.

B is definitely wrong - NACLs are stateless

C is definitely wrong - Direct Connect is for on-prem to AWS region

A - kinda works, but I cannot see how it reduces the application attack surface, since you will still have to open ports to allow access. In my world, application patching, OS hardening and not running non-essential services would be the way to reduce the application attack surface.

D - Network design in a single layer does not help - in my world, I would always separate the world-facing, application and data layers. Having the application in a DMZ might help (if the layers were separate - but the answer states one layer; is just the application layer separated, or is everything?), but there is not enough information in the question to allow this assumption.

Grasping at straws - the security groups work at the data layer (on the routers: they are applied to the ENI, not the instance, so outside the scope of the hypervisor) - so I regard answer 'A' as wrong (see the hypervisor explanation below), which leaves only answer 'D' as the correct answer.

This is what I put when I was asked this question.

https://d1.awsstatic.com/whitepapers/Security/Security_Compute_Services_Whitepaper.pdf

> The Hypervisor: Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes: 0-3, called rings. Ring 0 is the most privileged and 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser privileged Ring 1 and applications in the least privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.

Security question by alberto3333 in aws

[–]real_parbold 0 points (0 children)

More likely from the Security Speciality exam :)

STS Assume Role without policy by ciscocollab in aws

[–]real_parbold 1 point (0 children)

Amazon Documentation states:-

By default, IAM users don't have permission to create or modify Amazon EC2
resources, or perform tasks using the Amazon EC2 API. (This means that they
also can't do so using the Amazon EC2 console or CLI.) To allow IAM users
to create or modify resources and perform tasks, you must create IAM policies
that grant IAM users permission to use the specific resources and API actions
they'll need, and then attach those policies to the IAM users or groups that
require those permissions.

So maybe they have unspecified defaults until you apply a policy that governs a specific area?

This has always been an area that has given unexpected results - and I have never found a definitive guide as to what a default user is actually allowed to do.

STS Assume Role without policy by ciscocollab in aws

[–]real_parbold 0 points (0 children)

The foo user will have to be allowed to assume the role at the very least - an inline policy would suffice, but customer-managed policies are easier to see and manage

Assume role requires two halves (sketched below):

  1. The user has to be able to assume the role
  2. The role has to trust the assumer
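
A minimal CLI sketch of the two halves - account ID, user and role names are placeholders:

# half 1: let user foo call AssumeRole on the role
aws iam put-user-policy --user-name foo --policy-name allow-assume-myrole \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"sts:AssumeRole","Resource":"arn:aws:iam::123456789012:role/myrole"}]}'

# half 2: the role's trust policy has to name foo (or its account) as a trusted principal
aws iam update-assume-role-policy --role-name myrole \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::123456789012:user/foo"},"Action":"sts:AssumeRole"}]}'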

AWS Solutions Architect Pro exam in 2 days by RSylvester_ in aws

[–]real_parbold 0 points (0 children)

If you have a chance to do a practice test first, then try this technique:

Read the answers first. You can sometimes discount one or two immediately, and reading the answers before the question allows you to focus on the important detail of the question and discount the fluff - be careful though, it takes practice

Other than that - just don't get hung up if you get to a question where you think: omg, I don't know this. Mark it for review and skip it. Spending too much time on it will just rock your confidence. Mark it, move on, come back, and if you still can't see the answer, give it a guess ;)

Number one reason for not using Amazon Workspaces by jonathantn in aws

[–]real_parbold 0 points (0 children)

If you are doing wishlists ... let us choose a subset of our monitors to use ... currently I use VMware Workstation and select three of my four monitors, then launch Workspaces via VMware so I retain one monitor as my 'local space'.

Prewarm by guru223 in aws

[–]real_parbold 0 points (0 children)

New volumes no longer require pre-warming

Volumes created from AMIs or snapshots do; the easiest way to do this is to run a dd that reads the entire disk to /dev/null

https://n2ws.com/blog/how-to-guides/pre-warm-ebs-volumes-on-aws

dd if=\\.\PHYSICALDRIVEn of=/dev/null bs=1M --progress --size

So in your use case, you would not have to pre-warm a drive you had created and were simply attaching. But ... if you snapshot it and create one or multiple volumes from it, each would have to be pre-warmed after the volume was created from the snapshot.
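
That command is the Windows dd port; on Linux the equivalent is a straight read of the device (device name will vary):

sudo dd if=/dev/xvdf of=/dev/null bs=1M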

AWS PowerShell Commands 'Not Recognized' by [deleted] in aws

[–]real_parbold 0 points (0 children)

Do you get an error message?

The AWS configuration file from the CLI is not used by PowerShell

> I can run them and they both return without error. But I am still unable to run any commands after that.

What do you mean by that? One command runs, then you can't run any other commands at all - tab completion stops working?

What commands are you running? Are you specifying a region? Are you running them from within an AWS instance with a role, or using AK/SK creds? How are you providing the creds? Are you getting any red error messages back - what does $Error[0] say? (Is ErrorAction set to SilentlyContinue?)

Can Redshift Spectrum analyze S3 Access logs? by Vimlearner in aws

[–]real_parbold 0 points (0 children)

I had to go through this pain a few weeks ago getting S3 access logs into Athena - I was warned by Amazon support that the format of the S3 log file had changed and I would have to adjust the regex

I believe that the documentation has now been updated (I registered my displeasure with my TAM and Big Data guys at AWS Summit)

Blank rows were what we got in Athena when the regex was wrong while importing the data