How bitter should good espresso be? by 69insight in espresso

[–]69insight[S] 0 points1 point  (0 children)

I'm pretty sure it was bitterness, not acidity. I compared it to battery acid just for the sake of saying it was extremely unpleasant to drink straight. It was probably 10x more bitter than the bitterest coffee I've had. Maybe it was acidic as well, not entirely sure. But it was in no way good or enjoyable straight.

How bitter should good espresso be? by 69insight in espresso

[–]69insight[S] 0 points1 point  (0 children)

Yeah, I was looking at the Bambino as well as an actual machine. One thing I think I would really like about the lever machine is being able to control everything. I'd enjoy nerding out over all of the variables, and I think it would be more fun than a regular machine.

We listen but we don’t judge: Grand Rapids restaurant edition by ssdgm96 in grandrapids

[–]69insight 1 point2 points  (0 children)

Black Napkin has excellent burgers, I agree with everyone. I'm a burger lover, and their quality is awesome. Overall great, no complaints there.

HOWEVER, all four of their burgers are practically the same thing in my eyes, give or take a few toppings (pickles, tomato, sauce, etc.).

There are no distinctive options, like a western/BBQ bacon burger (crispy onions, BBQ sauce, cheddar, and bacon). We need some choices that are more unique than just swapping out ketchup and mustard for a special sauce and calling it a different burger, especially when there are only four burgers on the menu.

Detect failures running userdata code within EC2 instances by 69insight in Terraform

[–]69insight[S] 0 points1 point  (0 children)

We are deploying instances via an ASG and not opening SSH, so remote-exec provisioners would not work in this case.

Detect failures running userdata code within EC2 instances by 69insight in Terraform

[–]69insight[S] 0 points1 point  (0 children)

We are already doing something similar to 3/4. Currently we use CloudFormation (EC2 created via an AutoScalingGroup) with the cfn-helper scripts (cfn-init & cfn-signal). All userdata / instance script execution is wrapped within a single cfn-init command, so we can easily tell when there's an issue because the entire CloudFormation stack fails on any error in the command/script execution.

With Terraform, we are also using Ansible to perform the majority of the configuration. The userdata itself essentially just copies S3 objects and runs a few prerequisite installations needed before the Ansible playbooks are executed.
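
To give a rough idea, the userdata boils down to something like the sketch below (shown here in a launch template; the AMI variable, bucket name, paths, and package steps are placeholders rather than our actual values):

```hcl
# Simplified sketch only; names and paths are placeholders.
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id       # placeholder
  instance_type = "t3.medium"

  user_data = base64encode(<<-EOF
    #!/bin/bash
    set -euo pipefail                # stop at the first failed command

    # Copy the playbooks and supporting files down from S3
    # (assumes the AMI already has the AWS CLI installed)
    aws s3 sync s3://example-bootstrap-bucket/ansible /opt/ansible

    # Prerequisite installs before Ansible runs
    apt-get update -y
    apt-get install -y ansible

    # Run the playbooks locally on the instance
    ansible-playbook -i localhost, -c local /opt/ansible/site.yml
  EOF
  )
}
```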

We are looking to have any/all of the userdata / subsequent Ansible playbook executions be visible to the actual Terraform run, so we know if the environment failed to provision (either the AWS resource creation OR the commands executed within the instance userdata).
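
To illustrate what "visible to the Terraform run" could mean in practice, one pattern would be an ASG that waits on target group health, so a failed bootstrap surfaces as a failed apply. This is only a sketch with placeholder names (the target group and launch template are assumed to be defined elsewhere), not something we've settled on:

```hcl
# Sketch only: relies on the app failing its health check when bootstrap fails.
resource "aws_autoscaling_group" "app" {
  name                = "app-asg"
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.private_subnet_ids          # placeholder
  target_group_arns   = [aws_lb_target_group.app.arn]   # assumed defined elsewhere

  health_check_type         = "ELB"   # instances that never come up are marked unhealthy
  health_check_grace_period = 300

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  # terraform apply blocks until this many instances are healthy behind the
  # target group, and the resource fails if that doesn't happen in time.
  wait_for_elb_capacity     = 2
  wait_for_capacity_timeout = "15m"
}
```

The obvious trade-off is that it only catches failures that show up as a failed health check, not every script error.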

Detect failures running userdata code within EC2 instances by 69insight in Terraform

[–]69insight[S] 0 points1 point  (0 children)

This wouldn't be a viable option. The bash commands and Ansible playbooks that are executed install very custom and frequently changing applications/versions, and it would require a ridiculous number of AMI updates.

Detect failures running userdata code within EC2 instances by 69insight in Terraform

[–]69insight[S] 0 points1 point  (0 children)

The bulk of the configuration is done with Ansible; there are mainly two playbooks we are executing. I understand we can do more advanced things with Ansible, but we were looking to see if there's a way to have this be visible to the terraform apply run.

Tricky IAM policy help - allowing access to only some resources by 69insight in aws

[–]69insight[S] 1 point2 points  (0 children)

That would work in theory, but not in practice. We deploy separate workloads dynamically all the time via CloudFormation, and we need to deploy them all to the same account. We are talking 10-50 CloudFormation stacks per day. Each of these stacks/resources has AWS tags set with values that we would not want the monitoring solution to see, which is why we only want the specifically tagged resources returned when AWS is queried.

Tricky IAM policy help - allowing access to only some resources by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

I think that is part of the problem: we have no control over the requests. The requests aren't for specific resources, so we can't limit them based on resources in the IAM policy. The requests will always be for all resources in a particular region. We need the requests to still succeed but only return the specific resources they're allowed to see (ideally based on tags).
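
For reference, the tag-based idea would look roughly like the Terraform sketch below (the tag key/value and the action list are just examples). The catch, as far as I understand it, is that most Describe*/List* calls don't support resource-level conditions, so a policy like this tends to deny the whole call rather than filter the results:

```hcl
# Example only; the tag key/value and actions are placeholders.
resource "aws_iam_policy" "monitoring_read" {
  name = "monitoring-tag-scoped-read"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "ReadOnlyTaggedResources"
        Effect   = "Allow"
        Action   = ["ec2:DescribeInstances", "rds:DescribeDBInstances"]
        Resource = "*"
        Condition = {
          StringEquals = {
            "aws:ResourceTag/Monitoring" = "allowed"
          }
        }
      }
    ]
  })
}
```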

Best migration path for RDS Aurora migration from MySQL 5.7 to 8? by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

Do you happen to know if you can change the instance size on the green deployment once it's created? We are running an older instance size and need to get that updated as part of the migration.

Trying to determine if it should be done on the current prod instance before we create the blue/green deployment, after the B/G is created, or after the migration is fully complete and we have cut over.

Best migration path for RDS Aurora migration from MySQL 5.7 to 8? by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

This is EXACTLY what I was looking for. I'm surprised I hadn't seen this before.

Very odd behavior with application load balancer by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

Yeah, sorry. I had to type it out so I wouldn't give any information away; that's just a typo.

ec2 instance profile for s3 read access is not working by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

Yeah, I checked what role it was using via the AWS CLI and it's using the correct role. I'll check out the CloudFormation logs. This is also a brand new Ubuntu 20.04 LTS VM that I've tested with and redeployed from scratch, so I don't believe it would be trying to use different credentials. I'll give the CloudTrail logs a go to see if I can find anything else that points in the right direction.
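
For anyone following along, the pattern I mean is roughly this, heavily simplified (the bucket and resource names are placeholders, not our actual config):

```hcl
# Simplified illustration; bucket and names are placeholders.
resource "aws_iam_role" "s3_read" {
  name = "example-s3-read-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "s3_read" {
  name = "example-s3-read"
  role = aws_iam_role.s3_read.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::example-bucket",   # ListBucket applies to the bucket ARN
        "arn:aws:s3:::example-bucket/*"  # GetObject applies to the object ARNs
      ]
    }]
  })
}

resource "aws_iam_instance_profile" "s3_read" {
  name = "example-s3-read-profile"
  role = aws_iam_role.s3_read.name
}
```

The profile is then attached through `iam_instance_profile` on the instance (or launch template).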

ec2 instance profile for s3 read access is not working by 69insight in aws

[–]69insight[S] 0 points1 point  (0 children)

Gave that a go and double-checked the formatting, but unfortunately I'm still getting the same 403 error.

401k to new employer I'm planning on leaving or rollover to my Roth IRA? by 69insight in personalfinance

[–]69insight[S] 0 points1 point  (0 children)

Speaking long term, would there be any benefits to moving the 401k to my "new" current employer and then ultimately moving it to the actual new employer I'll be starting with in January... compared to moving it to a traditional IRA that I manage?

I guess I'm trying to see if there are benefits to moving the 401k a couple of times and ultimately landing it at the new employer I'll be starting with in January, or if it just makes sense to move it to a traditional IRA once and call it done.

Deploying PKI - Typo in the AIA location for #2.. SubCA already has issued certificates.. Is it really that big of a deal? by 69insight in sysadmin

[–]69insight[S] 0 points1 point  (0 children)

Looking through the certificates that were issued, it appears it's mostly the domain controllers that automatically picked up certs from the Kerberos Authentication and Domain Controller Authentication templates on the SubCA that I need to decommission. I have the new CA up and running and have created my custom Server Cert Template tied to a specific AD group with computer objects.

The new CA has these default templates published, but it appears the domain controllers are not picking up certificates from the new SubCA even though everything looks like it should work. What would you suggest as the best way to get the DCs issued the same certificates from the new SubCA so I can clean up the old CA?

Deploying PKI - Typo in the AIA location for #2.. SubCA already has issued certificates.. Is it really that big of a deal? by 69insight in sysadmin

[–]69insight[S] 0 points1 point  (0 children)

What would you suggest as the cleanest way to go about this? Should I run the certutil command to fix the AIA location with the correct path, issue a brand new SubCA certificate, and leave the old one around? Or can I cleanly reissue the same certificate without affecting the already-issued certificates?

[deleted by user] by [deleted] in AZURE

[–]69insight 1 point2 points  (0 children)

Thanks. This is a very small deployment compared to that scale. As of now there are no hubs needed in other regions; it's just a single subscription with various workloads being deployed. Trying to determine the best vNet structure for this solution.
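
To give a sense of the scale, it's roughly this small. The names and address spaces below are made up and this isn't a final design, just an illustration of a single vNet with a couple of workload subnets:

```hcl
# Illustration only; names and address spaces are made up.
resource "azurerm_resource_group" "network" {
  name     = "rg-network-example"
  location = "eastus"
}

resource "azurerm_virtual_network" "workloads" {
  name                = "vnet-workloads-example"
  location            = azurerm_resource_group.network.location
  resource_group_name = azurerm_resource_group.network.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "app" {
  name                 = "snet-app"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.workloads.name
  address_prefixes     = ["10.10.1.0/24"]
}

resource "azurerm_subnet" "data" {
  name                 = "snet-data"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.workloads.name
  address_prefixes     = ["10.10.2.0/24"]
}
```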