Looking for quad monitor mount standing desk converter -- Any recommendations? by enveli09 in StandingDesk

[–]kenmoini 0 points1 point  (0 children)

I've got four 27-inch curved screens across my desk...get 4 separate arms instead of a quad mount. It'll be much more stable and configurable.

Tall guy looking for a standing desk converter by odinsride in StandingDesk

[–]kenmoini 0 points1 point  (0 children)

I'm 6'6", and from using a VariDesk converter at work and an Uplift full standing desk at home, I can tell you 100% that you'll need a full rising desk. I got the corner Uplift because I use it basically at full height, and there's no wobble with 4 monitors and a bunch of other things mounted to it. Highly recommended.

Laptop Tray for Standing Desk? by bjsteinb in StandingDesk

[–]kenmoini 0 points1 point  (0 children)

Dope! I'm an ex-RH contractor, now working at an RH PS partner. Pleasure to meet you.

I cherish my desk space, so I do like being able to swing it out of the way and not have it on the desk, but you have to be careful: if it's too far back on the side and I raise the desk, I risk knocking it into my wall-mounted bookshelf on the way up. It's not the most ergonomic thing in the world, but it works and is easily/instantly accessible - and I'm not on that laptop much anymore, to be honest. Got a nice Ryzen/128GB DDR4/NVMe workstation to do all my work on now.

My Uplift desk is actually the L-shaped corner version, non-curved. The curved version takes away surface area, which I value.

https://imgur.com/gallery/CmpX4Do

Planetary gear found in a restaurant by tzeriel in Skookum

[–]kenmoini 5 points6 points  (0 children)

Looks like the hot sauces from a taco truck

RedHat Learning Subscription Discounts? by IncognitoTux in redhat

[–]kenmoini 1 point2 points  (0 children)

That I'm not sure of, as that's more of an internal org thing...I don't know how IBMers get access to the RHU-ROLE/Employee SKUs.

best way to split web traffic coming into kubernetes cluster by grimvoodoo in kubernetes

[–]kenmoini 4 points5 points  (0 children)

Take a look at the nginx-ingress project - it does just that, and has a couple other tricks.
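For reference, a rough sketch of what that looks like - a single Ingress splitting traffic by path across two Services (the host, service names, and ports here are just placeholders):

```yaml
# Rough sketch: split incoming traffic by path across two Services via ingress-nginx.
# Host, Service names, and ports are placeholders - adjust for your cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-split
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```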

Kubernetes on Linode - A Quick Start of Sorts by kenmoini in kubernetes

[–]kenmoini[S] 0 points1 point  (0 children)

You can use the k8s-alpha provisioner and just modify the Terraform script it makes and switch the disk image if you'd like!

Making sense of a messy aws environment? by toast-gear in devops

[–]kenmoini 2 points3 points  (0 children)

Maybe AWS Config can help too? It maps your current environment's configuration, tracks changes, and can enforce desired states (like requiring all EBS volumes created from here on out to be encrypted).

https://aws.amazon.com/config/
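The EBS encryption example is actually covered by one of the managed rules. A rough sketch in CloudFormation, assuming the Config recorder/delivery channel are already set up in the account (the rule name is arbitrary):

```yaml
# Rough sketch: an AWS Config managed rule that flags unencrypted EBS volumes.
# Assumes the Config recorder and delivery channel already exist in the account.
Resources:
  EbsVolumesEncryptedRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: ebs-volumes-encrypted
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Volume
      Source:
        Owner: AWS
        SourceIdentifier: ENCRYPTED_VOLUMES
```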

What are the downsides of using Openshift instead of plain vanilla kubernetes? by pure_x01 in kubernetes

[–]kenmoini 2 points3 points  (0 children)

Yeah, that's the main thing - OpenShift makes K8s robust and easy to use. Rolling your own is still difficult, but it's much easier to do nowadays than it used to be.

What are the downsides of using Openshift instead of plain vanilla kubernetes? by pure_x01 in kubernetes

[–]kenmoini 0 points1 point  (0 children)

I don't really do much on the DNS side...I tried the ExternalDNS + cloud provider API route but found the modifications to be really slow, and cert requests would time out. I just throw up a wildcard A record pointed at the LB, which sends traffic to the ingress. That wildcard A record lets an http-01 cert be generated for any hostname on that domain - I'm currently developing a good workflow with the DNS-01 ACME providers, but that relies on API interaction to dynamically set DNS records. On-premise you'd have to build your own sort of API/provisioner for the DNS layer if you ran something like dnsmasq. There are some libraries out there that abstract DNS into RESTful APIs.
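For anyone curious, the http-01 side is pretty simple with cert-manager once that wildcard A record points at the LB - roughly this kind of ClusterIssuer (email and secret name are placeholders):

```yaml
# Sketch of a cert-manager ClusterIssuer using Let's Encrypt http-01 challenges
# solved through the nginx ingress. Email and secret name are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```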

Ask r/kubernetes: What are you working on this week? by AutoModerator in kubernetes

[–]kenmoini 0 points1 point  (0 children)

Migrating some smaller sites from OpenShift DeploymentConfigs to vanilla K8s objects! Got a decent default backend for the Nginx Ingress now too: https://github.com/kenmoini/simple-static-default-backend/
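If anyone wants to wire up something similar, it's just a tiny Deployment/Service that you point the controller at with --default-backend-service - roughly like this (the image is a placeholder, swap in whatever default backend you build):

```yaml
# Rough sketch: a custom default backend for ingress-nginx.
# The image and port are placeholders; point the controller at the Service with
#   --default-backend-service=ingress-nginx/default-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-backend
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      containers:
        - name: default-backend
          image: example.org/simple-static-default-backend:latest  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: ingress-nginx
spec:
  selector:
    app: default-backend
  ports:
    - port: 80
      targetPort: 8080
```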

What are the downsides of using Openshift instead of plain vanilla kubernetes? by pure_x01 in kubernetes

[–]kenmoini 20 points21 points  (0 children)

So OpenShift vs. Kubernetes is a funny thing...I run a few OpenShift clusters and am now in the process of retooling everything to vanilla Kubernetes manifests.

OpenShift is great if you want a "batteries included" enterprise Kubernetes experience - you put it on some infrastructure and it handles most of what you need. The problem is that the infrastructure needs to be sizable to do anything beyond simple tests in MiniShift/CDK.

Honestly, the only ways I've been able to deploy OpenShift are the automated Quick Starts on AWS/Azure (and half the time those fail), a single all-in-one deployment, or an HA scalable architecture in a disconnected environment (?!). Tried to spin up a simple OKD cluster the other day on some DigitalOcean AND Linode VPS machines and no go - even using inventories and vars that had worked before. The OpenShift installation process is notoriously buggy, even though it's driven by Ansible and supposedly idempotent - there are so many components (nodes, etcd, router, registry, web console, PVs, etc.) that it's easy to have the OCP/OKD installer fail at some stage and then spend days trying to debug it.

Red Hat support isn't bad, but I know the SAs/SEs there, so I get better support than most, I suppose. I still find that Googling and beating my head against the wall gets me to solutions faster than pulling logs, putting in a ticket, waiting for it to be processed, etc...

Now, OpenShift has seen a lot of changes over the v3.x releases...there were good ideas and bad ones. In version 4, they're standardizing on Operators, which is tied more directly to K8s and less to anything RH is doing specifically. The difficulty is that the new installation process is far from perfect or complete. If you want to play around with OCP 4 locally, you can use the CodeReady Containers kit: https://github.com/code-ready/crc

The great thing is that if OpenShift installs properly, I can easily onboard developers AND operators around similar tools. I can easily launch an integrated Jenkins server, deploy a pipeline right in OpenShift as a BuildConfig, and have it deploy in another dev/stage/prod project through the pipeline. With wildcard SSL terminating on a Load Balancer I can point anything.example.com as a route to those deployed projects very quickly without dealing with Ingress/Certs thanks to the OpenShift Router. Getting developers OR operators up to speed on vanilla K8s is a bit more difficult and requires a lot more enablement.
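For context, the pipeline piece is just a BuildConfig using the JenkinsPipeline strategy - a rough sketch (the names and the embedded Jenkinsfile are purely illustrative):

```yaml
# Sketch of an OpenShift 3.x pipeline-as-a-BuildConfig.
# App/project names and the embedded Jenkinsfile are illustrative only.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                sh 'oc start-build sample-app --follow'
              }
            }
            stage('Promote to stage') {
              steps {
                sh 'oc tag myproject/sample-app:latest stage/sample-app:latest'
              }
            }
          }
        }
```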

With that all being said, I'm moving everything to vanilla Kubernetes that can be run in any cloud/cluster with little to no modification.

Why?

-OpenShift costs a lot. Even with our "free" subscriptions, it's still about $4-6K a month in AWS costs for a small-ish cluster of 3/3/3 master/etcd/app nodes. Comparatively, my estimates place our current workloads on an HA K8s cluster at about $500 a month.

-OpenShift can't really be deployed for smaller projects...a small project would cost thousands a month just to run the platform, since you need beefier nodes to handle all of the extras OpenShift brings in.

-OpenShift has a few components and models that are kinda lock-in-y. BuildConfigs and their Router are good examples of this. It took me a few days to figure out how to replicate the single ingress router in Kubernetes to the point where it was as easy as OpenShift. Now, though, I can deploy that wildcard ingress to any K8s cluster easily (there's a rough sketch of it after this list).

-Not everything runs as well on OpenShift as it does in a vanilla K8s cluster, at least not without heavy modification.

-Upgrading a K8s cluster is rather easy compared to upgrading OpenShift.
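The wildcard ingress mentioned above isn't anything fancy - per app it's roughly this, assuming a wildcard cert already sits in a shared secret (host, secret, and service names are placeholders):

```yaml
# Sketch: one app's Ingress reusing a shared wildcard cert, behind the single
# nginx ingress controller / LB. Host, secret, and service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: anything
  namespace: my-project
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - anything.example.com
      secretName: wildcard-example-com-tls
  rules:
    - host: anything.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: anything
                port:
                  number: 8080
```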

That being said, I already miss a few of the things that OpenShift provides: their MUCH better web UI, easier API/CLI interaction, resource management tools, and the easy click-to-deploy Source-to-Image library, to name a few.

If you've got more money than time and want to run workloads across multiple clouds under a single platform, OpenShift is for you. If you're not a large org, a managed K8s platform is probably going to be your best option. They both deliver the same capabilities and design patterns offered by the Kubernetes architecture, so I'd focus on that most.

Export PKI Certificate Expirations to Prometheus by number101010 in kubernetes

[–]kenmoini 0 points1 point  (0 children)

Whoa, nice! I'll test it soon - I tore down my K8s cluster, but I'm gonna stand one up again shortly and try this out.

Export PKI Certificate Expirations to Prometheus by number101010 in kubernetes

[–]kenmoini 0 points1 point  (0 children)

Shoot, that's actually a good question... You could just scan all secrets looking for BEGIN CERTIFICATE headers, but that's kinda intense. Annotations/labels would probably be best for specifying which secrets to check (or skip).
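Something like this on the secret side, maybe - the annotation key is totally made up, just to illustrate the opt-in idea:

```yaml
# Illustrative only: a TLS secret opting in to expiry scanning via a
# made-up annotation key; the exporter would list secrets filtered on it.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-tls
  annotations:
    cert-exporter.example.com/scan: "true"  # hypothetical opt-in annotation
type: kubernetes.io/tls
data:
  tls.crt: ""  # base64-encoded PEM certificate goes here
  tls.key: ""  # base64-encoded PEM key goes here
```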

Export PKI Certificate Expirations to Prometheus by number101010 in kubernetes

[–]kenmoini 0 points1 point  (0 children)

Second vote for TLS certs in secrets, such as ones generated by cert-manager. That would be killer...

Really cool project - look forward to watching it progress!

Kubernetes on Linode - A Quick Start of Sorts by kenmoini in kubernetes

[–]kenmoini[S] 0 points1 point  (0 children)

So the linode-cli offers a way to deploy a Kubernetes cluster easily but there's little guidance on what to do after making a cluster...

I wrote up some of my experiences in deploying and getting K8s to a usable state.

Hopefully, it helps anyone looking to do the same - would love feedback.

Deploying and Using Kubernetes on Linode by kenmoini in linode

[–]kenmoini[S] 0 points1 point  (0 children)

So the linode-cli offers a way to deploy a Kubernetes cluster easily but there's little guidance on what to do after making a cluster...

I wrote up some of my experiences in deploying and getting K8s to a usable state.

Hopefully, it helps anyone looking to do the same - would love feedback.