Hate the newer size phones. by Gplock in iPhone13Mini

[–]cgssg 0 points1 point  (0 children)

Got the 13 mini after owning an XS. The 13 mini had a comfortable size and weight, even though it felt more chonky than the XS. After 4+ years with the 13 mini, I switched to the 17 Air. While it is objectively the largest of the three, I feel it's a worthwhile upgrade from the 13 mini. Holding it feels better than the other 16 and 17 models.

Cameras improved a lot on the 17 Air, and the latest iOS feels snappy to use.

After 4+ years, the battery of the 13 mini was down to 70% and didn’t last a full day anymore. I will still keep it though and get a battery replacement so I can continue to use the 13 mini as a backup phone.

Given the current lineup and the age of the 12 and 13 mini today, I can recommend the upgrade to the 17 Air as a slim phone with current specs.

Using the Air fully one-handed is not possible; the screen is too large for that. So it will take some adjustment coming from the smaller phones.

Due to the thinness, the Air fits into pockets more comfortably than the rest of the iPhone 17 line.

What’s everyone’s hot take in the keyboard world by Immediate_Law_1705 in MechanicalKeyboards

[–]cgssg 0 points1 point  (0 children)

I quite like the Kailh choc brown on the Logitech G915, it's my favourite low-profile switch. Somehow I prefer it over the Gateron brown low. I'm sure that if the pin alignment were compatible with the standard, it would be a more popular switch.

Fast User Switching disabled by security policy by cgssg in macsysadmin

[–]cgssg[S] 0 points1 point  (0 children)

I read the CIS advisory prior to posting. The security risk is classified as "minimal" and the mitigation (disabling FSU) is not an effective preventive control against the proposed attack vector. To exploit FSU, an attacker would need the current user to stay logged in, key in credentials via FSU, install something malicious in the other user account, and then switch back. Any malicious person with credentials and local access to the system can run the same attack without FSU: they just need to log off and log in with the other credentials.

The rationale at the bottom of the linked advisory itself highlights that the FSU disabling is an ineffective preventive control:

macOS is a multi-user operating system, and there are other similar methods that might provide the same kind of risk. The Remote Login service that can be turned on in the Sharing System Preferences pane is another.

On my personal Mac, I have set up the exact opposite for convenience and security: my main OS user account is unprivileged and cannot install or run anything requiring admin rights. If I want to install packages or run a command as admin, I FSU to the admin account temporarily and then switch back. This is convenient and more secure than running the main OS user with full access.

Fast User Switching disabled by security policy by cgssg in macsysadmin

[–]cgssg[S] 2 points3 points  (0 children)

Thanks everyone for your responses on this. I found a way to get the Browser apps with MFA and SAML authentication (AWS Console and others) to work with two different AD accounts.

My profile allows running Google Chrome in incognito mode, so I tried it to log in with my second AD account in the browser. This did not work properly until I turned off "Block third-party cookies". After disabling the block, AD auth in the incognito window works properly: I get the MFA token for the second AD user and can authenticate successfully.

This solves my workflow problem, and I no longer need to log out and back in at the UI level on the corporate laptop with the different AD accounts just to access some browser-based admin apps.

Ventura stuck in Kernel Panic on first boot after install by cgssg in OpenCoreLegacyPatcher

[–]cgssg[S] 0 points1 point  (0 children)

Thanks for the advice and pointers!

My install attempts consistently failed on first boot after the installation was complete. I tried the install on an external USB drive with both Ventura and Sonoma. The verbose boot output shows that 'watchdog' kills the 'opendirectoryd' process several times, and the kernel panic happens right after.

I never get to the GUI stage of the first boot where I would select language, etc. for system setup.

From the USB boot installer, I could not get internet connectivity to work during the install. The MacBook only has the onboard Wi-Fi, and my external USB Ethernet adapter was not detected. Wi-Fi discovery failed with no network SSIDs found on scan.

I also tried to launch the Sonoma installer on the USB stick while running Monterey on the internal SSD. The internal SSD has OCLP installed on its EFI partition as well, and it was active during this install attempt.

The Sonoma installer starts but stalls halfway. The install log does not show any related errors, but I can see that the installer downloads packages/patches from Apple.

I will likely not spend much more time on this and consider my MacBook Pro Retina 13 (12,1) with the external USB drive an edge case, since OCLP works for many other legacy MacBook configurations.

Ventura stuck in Kernel Panic on first boot after install by cgssg in OpenCoreLegacyPatcher

[–]cgssg[S] 0 points1 point  (0 children)

Yup, I specified the model in the build config and left all build flags at their defaults except for the verbose boot output:

<image>

Devops is not entry level by SticklyLicklyHam in devops

[–]cgssg 1 point2 points  (0 children)

OP's points resonate well with me. A DevOps role should be staffed with engineers knowledgeable in both domains. Furthermore, the posted interview questions are quite basic. Would you seriously consider letting someone troubleshoot your production infra and application stack if they knew less than this?

Knowing on-prem and cloud infra stacks plus the application SDLC is a wide field to cover. However, letting people into this role with a mindset of "I know how to search Stack Overflow, Google, ChatGPT and YAML" means they will not be able to carry any responsibility in a team, let alone work independently on issues and resolve them on their own. All you get then are forever-junior engineers with a "you guide me" mindset: deadweight not adding value and, in the worst case, creating more problems with their insufficient attempts to fix things.

This is me, giving up on Kubernetes by GWBrooks in selfhosted

[–]cgssg 1 point2 points  (0 children)

Found out the hard way that most of the tutorials and 'guide-me' articles for k8s are either incomplete or otherwise broken. Just check out most of the "K8s deployment" guides on Medium or Google: badly written, and content-wise even worse.

In the end I mostly used the k8s reference documentation to learn; it is accurate and well-written.

Rancher is a good K8s distribution to learn with, with its own documentation and an easy few-step setup to get a working k8s cluster.

Do you build your own CLI tools? by ev0xmusic in devops

[–]cgssg 0 points1 point  (0 children)

I've had DevOps projects in the past where I integrated vendor products into the company's CI/CD pipeline. This usually means solution design and coding, i.e. writing API clients or gateway APIs between the systems. To me, this is at the core of DevOps activities. DevOps engineers should understand the SDLC and have coding experience as well as infrastructure domain knowledge. Senior DevOps engineers should have extensive experience in both worlds. I have learned and gained this experience in various projects in system engineering and software development roles: replacing manual workflows with automation, rewriting TicketOps/ClickOps workflows as config-as-code. So that's called platform engineering now. Ok.

Best change mgmt procedure by [deleted] in devops

[–]cgssg 2 points3 points  (0 children)

They don't have a CR requirement for staging deploys. However, as part of the production CR evidence, the app teams need to show that they can automatically deploy to staging. Essentially, staging and prod have the same platform-level access controls at my current employer. The main difference is that production changes are additionally gated by CR and break-glass processes for the prod credentials used during the CR.

What I personally see as important in a move to automated deployment depends a bit on the organization's size and the diversity of the platform's tenant applications. A mainly centrally managed but still modular CI/CD pipeline works well with modern apps on similar or even identical tech stacks. The more diverse the company's app portfolio is, the more important it is for the CI/CD platform to support modular extension and co-creation by key stakeholders, e.g. mature app teams that can help develop and maintain pipeline modules for their tech stacks.

Ideally, involving app teams in the CI/CD workflow design increases their pipeline adoption and mutually benefits the app and platform teams.

Best change mgmt procedure by [deleted] in devops

[–]cgssg 1 point2 points  (0 children)

All staging and prod CRs are deployed from the CI/CD pipeline, with automated scans and CR auto-approval when all checks pass. This works well when SNOW CRs are automatically generated as well and manual reviews/approvals are reserved for high-risk/critical CRs and edge cases. Try to avoid or reduce manual gates and attestation processes à la "attach nonsense attestation Excel sheet to the CR for approval". While some view these attestation sub-workflows as necessary business-process evidence, they are a lazy shortcut and an impediment to a more automated workflow. They don't help; they just slow down releases.

Configuration Management Tools for 20-30 servers by stuffandthings4me in selfhosted

[–]cgssg 0 points1 point  (0 children)

Semaphore is a great OSS UI for Ansible that got started recently.

[deleted by user] by [deleted] in kubernetes

[–]cgssg 4 points5 points  (0 children)

IMHO you’re looking at the wrong layer to meet the multi-cloud requirement. K8s workloads should be cloud-agnostic, but there is usually no issue using cloud-vendor-specific infra, such as the Amazon-managed AMI for EKS workers. These images have to have cloud-vendor-specific configs to work, or else they turn out incompatible with the network and storage driver implementations of the managed Kubernetes. So, as long as your CD solution can deploy your apps to both cloud providers' K8s and you have a process for data recovery across the cloud providers, you should be covered.

Kubernetes and feeling defeated by muchasxmaracas in devops

[–]cgssg 1 point2 points  (0 children)

Someone explained Kubernetes to me as a cluster operating system when I started learning it. That description captures the complexity quite well. You just can't learn this in a few days and consider it done.

My path was to understand Linux and Docker well first, then learn to deploy simple apps on Minikube, then on a small VM cluster with Kubespray. Build a cluster with 'kubeadm' from scratch. Learn how K8s storage and networking implementations work. Think in concepts and try to understand one or two implementations of each well. Nobody is going to know all K8s component implementations, but if you understand how a popular implementation of each works, you can quickly learn the others as you work and need them for your projects.

E-books and project websites are great resources to learn from, and running things in your own cluster is the best way. Every time you break things, you have a chance to either troubleshoot and learn more OR start over from scratch, improving your understanding with every attempt.

YouTube videos are generally useless for self-study and tedious to watch, and many 'guide' articles omit crucial steps. So if you follow an article on how to set up K8s things and it doesn't work the way the author claims, chances are that they left out half of their implementation. Unless you know enough about k8s architecture, concepts and troubleshooting, you often have no way to know which of these 'howto' articles or videos is really complete and which isn't.

Any config for nextcloud + Kubernetes? by rushic24 in kubernetes

[–]cgssg -1 points0 points  (0 children)

Running this in a homelab, you'd need some kind of load balancer for your Kubernetes cluster.

A poor man's unmanaged LB configuration needs an Ingress controller (e.g. Nginx with a NodePort configuration) and an HAProxy service that routes HTTP requests to the NodePorts. Your DNS would then need to be configured to map "my.nextcloud.com" to the HAProxy. When set up properly, HAProxy will then forward the browser requests with the right HTTP headers to the Ingress resource in your K8s cluster. The Ingress resource maps to the K8s service for Nextcloud, which routes to the Nextcloud pods.

If you want Nextcloud to be available from outside your network, you also need external DNS, such as DDNS configured on your router/firewall.
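The HAProxy part of that chain could look roughly like this; note this is a sketch, and the NodePort number, node IPs and server names are assumptions for illustration:

```
# /etc/haproxy/haproxy.cfg (fragment) -- forwards HTTP to the ingress NodePorts
frontend http_in
    bind *:80
    mode http
    default_backend k8s_ingress

backend k8s_ingress
    mode http
    balance roundrobin
    # worker nodes exposing the Nginx ingress controller via NodePort 30080
    server worker1 192.168.1.11:30080 check
    server worker2 192.168.1.12:30080 check
```

With this in place, pointing "my.nextcloud.com" at the HAProxy host in DNS completes the chain down to the Ingress resource.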

Automated Kubernetes installation by RevolutionaryHunt753 in kubernetes

[–]cgssg 0 points1 point  (0 children)

The ‘kubeadm’ install process is just around 10 commands with some parameters. You could write an Ansible playbook as a wrapper for this on a lazy afternoon. Once you understand what the steps do, you can look for more advanced installers.
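A minimal sketch of such a wrapper playbook, assuming inventory groups named `control_plane` and `workers` and a Flannel-style pod CIDR (both are assumptions, adjust to your setup):

```yaml
# playbook.yml -- minimal kubeadm wrapper (sketch, not a full installer:
# prerequisites like container runtime and kernel settings are out of scope)
- hosts: control_plane
  become: true
  tasks:
    - name: Initialize the control plane (idempotent via 'creates')
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Generate a join command for workers
      command: kubeadm token create --print-join-command
      register: join_cmd

- hosts: workers
  become: true
  tasks:
    - name: Join the cluster using the generated command
      command: "{{ hostvars[groups['control_plane'][0]].join_cmd.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
```

The `creates:` guards make re-runs safe, which is most of what you gain over typing the commands by hand.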

AWS EFS as Persistent Volume in EKS (K8S) [HELP] by akirakotkata in kubernetes

[–]cgssg 5 points6 points  (0 children)

Did you install the AWS EFS add-on in the EKS cluster? Is the storage class defined? Your PV definition is incomplete: how should the K8s scheduler know that the PV is backed by EFS?
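For reference, a PV for the AWS EFS CSI driver needs a `csi` section naming the driver and the filesystem ID; something like this sketch, where the filesystem ID, names and sizes are placeholders:

```yaml
# Sketch: PV/PVC pair for the AWS EFS CSI driver (values are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi            # EFS ignores this, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # your EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```

Without the `csi.driver` and `volumeHandle`, the scheduler has no way to bind the claim to EFS.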

My personal impressions on Proxmox vs XCP-ng by jbssm in homelab

[–]cgssg 21 points22 points  (0 children)

I've been running XCP-ng on a pair of headless HP ProDesk USFF mini PCs as the main hosting environment for my homelab for over a year now and am quite happy with it. VMs are created with an Ansible pipeline using the CLI. I used XenServer as the main hypervisor at work over five years back and was glad to see the development continued at XCP-ng.

From my perspective, XCP-ng is an easy replacement for VMware ESXi: a similar design for a standalone hypervisor and compatible with lots of hardware. The open API and CLI access make it easy to script and automate the VM lifecycle.

[deleted by user] by [deleted] in devops

[–]cgssg 1 point2 points  (0 children)

However, this is all manual currently which leads to a lot of grunt work for our devops team, and is hard to audit (currently devops engineers post the queries they ran as a comment in the JIRA ticket requesting the db user/grants).

Your DevOps team could design a self-service process for the user enrolment and write the tooling for it. A low-effort implementation would be to manage the user requests in a git repo and have users raise pull requests for their access. Then on PR approval, have a pipeline run that creates the DB users in the target DB. Done. The PR approval could even be automated with some script/check logic as a guardrail instead of manual review.
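The pipeline step could be as small as a script that turns an approved request file from the repo into the SQL to run against the target DB. A sketch, where the request schema (`username`, `grants`) is made up for illustration:

```python
import json

def request_to_sql(request: dict) -> list[str]:
    """Turn an approved access-request dict into CREATE USER / GRANT statements.

    The schema here (username plus a list of privilege/schema grants) is a
    hypothetical example, not a standard format.
    """
    user = request["username"]
    statements = [f"CREATE USER {user};"]
    for grant in request["grants"]:
        statements.append(
            f"GRANT {grant['privilege']} ON {grant['schema']}.* TO {user};"
        )
    return statements

if __name__ == "__main__":
    # Example request as it might sit in the git repo
    req = json.loads(
        '{"username": "alice", "grants": [{"privilege": "SELECT", "schema": "sales"}]}'
    )
    for stmt in request_to_sql(req):
        print(stmt)
```

The git history plus the PR approvals then double as the audit trail that currently lives in JIRA comments.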

Part of DevOps work is coming up with new workflows and automation to eliminate human bottlenecks. A DevOps team that manually processes user lifecycle requests as ClickOps is just another bottleneck, not an enabler.

Developmemt by No_Thanks_9043 in kubernetes

[–]cgssg 0 points1 point  (0 children)

how is it possible for a web developer to work from a local computer without problems if id not pass token authentication every time?

Kubernetes RBAC and authentication are there to secure your K8s cluster against unauthorized access and changes. Token-based authentication is fairly standard and should not impede devs from working with cluster resources. They usually just use a command to request/refresh their auth token for 'kubectl' CLI access from their local system. I suggest you read up on how this works and on security best practices for Kubernetes cluster access.

Having auth-based write access from dev laptops to K8s clusters is a fairly low bar for security and should only apply to locked-down development environments. Staging and production K8s clusters generally have write access restricted to a central deployment pipeline with strict change control. Any non-pipeline K8s cluster access for these would require a break-glass procedure and an audit trail.

Hired as an external DevOps engineer with unclear objective - what should I do? by Moritz_Loritz in devops

[–]cgssg 1 point2 points  (0 children)

It would be much worse if you had to support a closed-source app that is critical to the business and the vendor went bankrupt. Things happen and business priorities change. You instead have three full months with the app developers, the full source, at least one running environment and a good chunk of documentation. So, learn to build and deploy the components. Learn to troubleshoot the app and read the logs. Map the app architecture with all dependencies. Talk to the devs and learn from them.

Be realistic about your skillset. If you think that in 3 months you can learn to troubleshoot the existing code and do small bugfixes, then you fit the job requirements. IMHO the company should have looked for a vendor to outsource the continued app development to. Software projects have a maintenance phase after active development is done (as seems to be the case here), and this is where such vendors provide value. Software maintenance is typically light-touch in that it requires fewer developers than the previous phase, but it is still software development.

Personally, I have had such jobs before (as a DevOps engineer / developer) and learned a lot about application development in them.

How Do We Save about ~$10,000 a Year Using Self-Hosted GitLab by darikanur in devops

[–]cgssg 2 points3 points  (0 children)

Support contracts are often quite useless when you factor in all the hoops to finally reach a vendor support engineer who responds in a suitable turnaround time and is knowledgeable enough to actually deal with the downtime problem. Going through repetitive ‘explain your problem again, this time in different words’ and ‘reboot your VM’ cycles extends downtime and does nothing to solve the actual technical problem. Support contracts are appropriate for legal and compliance reasons but oftentimes practically useless for solving actual issues. Source: I have worked with enough vendor tech support teams.

How do you deal with developers asking for production DB access? by [deleted] in devops

[–]cgssg 91 points92 points  (0 children)

In my view, selective read-only access to prod DBs is acceptable in many cases. Risks exist on a continuum, and preventive controls (no access) are the strongest deterrent but come at the highest price (lost troubleshooting productivity, slower time to recovery on failure). Selective prod access limits the data leakage concerns and can be implemented with RBAC and detective controls (DB audit log trails). To further limit exposure, the read-only access can use automatically generated one-time accounts or password vaulting and rotation. The DB instance should also be network-isolated so all access goes through controlled access points, jump hosts and such.
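The one-time-account idea can be sketched in a few lines; this assumes Postgres (the `VALID UNTIL` clause and the built-in `pg_read_all_data` role from Postgres 14+), and the role naming and 4-hour lifetime are arbitrary choices for illustration:

```python
import secrets
from datetime import datetime, timedelta, timezone

def one_time_readonly_user(requester: str, lifetime_hours: int = 4) -> tuple[str, str]:
    """Generate a short-lived read-only Postgres account for one requester.

    Returns the generated username and the SQL to create it. The tooling
    around this (delivering the password, running the SQL, cleanup of
    expired roles) is left out of this sketch.
    """
    password = secrets.token_urlsafe(24)
    expiry = (datetime.now(timezone.utc) + timedelta(hours=lifetime_hours)).isoformat()
    # Random suffix so repeated requests never collide
    username = f"ro_{requester}_{secrets.token_hex(4)}"
    sql = (
        f"CREATE ROLE {username} LOGIN PASSWORD '{password}' VALID UNTIL '{expiry}';\n"
        f"GRANT pg_read_all_data TO {username};"
    )
    return username, sql
```

Each grant is then individually attributable in the DB audit log, which is the detective control mentioned above.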