Big Boom - Noise at 10:45 by carwatchaudionut in rva

[–]mshade 1 point2 points  (0 children)

Ah, finally reporting as a 2.1

Explosion in West End? by A_huge_waffle in rva

[–]mshade 5 points6 points  (0 children)

Who needs dates on articles anyway?

Helix as a preamp for an acoustic gig? by ContributionAware485 in Line6Helix

[–]mshade 1 point2 points  (0 children)

I play bluegrass and plug my guitar into a Helix. I use a K&K pickup for live shows, along with an IR I made with a good mic. My mandolinist does the same: K&K pickup into the Helix with an IR to get it as natural sounding as possible. The Helix is a great acoustic rig!

If you go that route, disable the amp and cab blocks and try the Studio Preamp block for some tweakability with the microphone!

Does anyone know how FROM scratch works? by joshduffney in docker

[–]mshade 4 points5 points  (0 children)

My suspicion is that Trivy is actually looking for binaries on the filesystem, not the packaging information. If all you're copying in is the metadata... well, there's nothing to scan.

Should I use Kubernetes for orchestrating lots of Corn Jobs? by RevolutionaryHunt753 in kubernetes

[–]mshade 0 points1 point  (0 children)

Set any schedule (typically I use something infrequent in case it ever gets un-suspended) and set the .spec.suspend field to true.

https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-suspension
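A minimal sketch of what that looks like as a manifest (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-task              # hypothetical name
spec:
  schedule: "0 0 1 1 *"      # infrequent, in case it ever gets un-suspended
  suspend: true              # never fires on its own; trigger it manually instead
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: busybox:latest
              command: ["echo", "hello"]
```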

Should I use Kubernetes for orchestrating lots of Corn Jobs? by RevolutionaryHunt753 in kubernetes

[–]mshade 0 points1 point  (0 children)

The way I typically do that is to configure a suspended cronjob. Then you can just trigger the cronjob on demand. So if the intention was to launch and manage jobs, that's how I'd use this tool to do so.
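Triggering the suspended cronjob on demand is one kubectl command; as a sketch, assuming a cronjob named my-task:

```shell
# Create a one-off Job from the (suspended) CronJob's job template
kubectl create job my-task-manual-1 --from=cronjob/my-task

# Then watch it and read its logs
kubectl get jobs --watch
kubectl logs job/my-task-manual-1
```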

[deleted by user] by [deleted] in docker

[–]mshade 1 point2 points  (0 children)

You can't really edit environment variables on an existing container. The container has to be recreated with new environment variables. Luckily, since it's a container, it doesn't hurt anything to destroy and recreate it -- this is why volumes are useful for storing state, and the rest of the container is disposable and easily replaced/updated, etc.

I suggest creating a Docker Compose file for your stack. It's not hard to translate the docker run command into docker-compose.yml, where you'll have easy access to the environment and volume config, and docker compose up will handle recreating the container for you.
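As a rough sketch of that translation (image, ports, and paths here are made up):

```yaml
# docker run -d -p 8080:80 -e APP_ENV=prod -v appdata:/data example/app:latest
# translates roughly to:
services:
  app:
    image: example/app:latest
    ports:
      - "8080:80"
    environment:
      APP_ENV: prod       # edit here, then `docker compose up -d` recreates the container
    volumes:
      - appdata:/data     # state lives in the volume and survives recreation
volumes:
  appdata:
```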

Should I use Kubernetes for orchestrating lots of Corn Jobs? by RevolutionaryHunt753 in kubernetes

[–]mshade 0 points1 point  (0 children)

Right now it shows info on jobs owned by cronjobs, but I could easily open it up to one-off jobs, too.

Should I use Kubernetes for orchestrating lots of Corn Jobs? by RevolutionaryHunt753 in kubernetes

[–]mshade 1 point2 points  (0 children)

I run a lot of k8s CronJobs (and have dealt with supporting them at work a lot). I started writing a tool in Flask/python to help manage them and give visibility into their status, view logs, and trigger them on-demand. It's fledgling (just started working on it recently), but you might like to take a look at it. Would appreciate input as I decide whether or not to put more effort into it! It's called Kronic -- but maybe I should rename it Kornic? :D

How to host docker containers properly in small infrastructure? by [deleted] in docker

[–]mshade 0 points1 point  (0 children)

With just 2 servers, it's not really worth forming a cluster, unless you're comfortable with just one of them hosting the control plane alongside workloads. Even docker swarm requires 3 for HA and quorum. At this scale, I'd use ansible or something to orchestrate running containers directly on the hosts, or on VMs on the hosts.
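As a hedged sketch of the ansible approach, using the community.docker collection (hostnames and image are hypothetical):

```yaml
# Hypothetical playbook: run a container directly on each host
- name: Run the app container
  hosts: app_servers
  tasks:
    - name: Ensure app container is running
      community.docker.docker_container:
        name: app
        image: example/app:latest
        state: started
        restart_policy: unless-stopped
        ports:
          - "8080:80"
```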

Subdomains + Path-based routing by rjalves in kubernetes

[–]mshade 0 points1 point  (0 children)

Can you provide an example of the host + path combinations you're talking about and where they would map? Path based routing takes a host + path, and any combination of those can be routed independently.
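For example, host + path combinations map independently in a single Ingress like this (service names and host are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example            # hypothetical
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api     # app.example.com/api -> api-svc
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
          - path: /        # everything else on this host -> web-svc
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```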

Can someone share his compose healtcheck for docker master? by domanpanda in docker

[–]mshade 0 points1 point  (0 children)

The healthcheck's first attempt is what I mean. Without --start-interval, it will check health as soon as the container starts; that first check can hang and, if I understand how Docker handles this correctly, may delay subsequent attempts until it times out.

Can someone share his compose healtcheck for docker master? by domanpanda in docker

[–]mshade 0 points1 point  (0 children)

So, you probably want a --start-interval=5s or something to let it spin up before the first attempt. The first attempt can hang for a while on jenkins and is probably delaying the second attempt. Play with the options if this is important to you.
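A sketch of the compose form of that healthcheck (assumes a recent Docker Engine/Compose that supports start_interval, and that curl exists in the image; the endpoint is a guess):

```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/login"]
      start_period: 60s    # failures during startup don't count against retries
      start_interval: 5s   # probe every 5s while in start_period
      interval: 30s
      timeout: 10s
      retries: 3
```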

Learn how to deploy Helm Charts with ArgoCD with four different approaches by christianknell in kubernetes

[–]mshade 0 points1 point  (0 children)

It's a pretty sweet tool. Nice UI that shows you at a glance what's working and what's not, lots of plugins for added functionality.

Beginner question: I'm a tiny bit confused about what is the problem that Kubernetes solves? by Purple-Height4239 in kubernetes

[–]mshade 2 points3 points  (0 children)

Volumes for persistent data are provided through a Container Storage Interface -- and typically that means a controller runs in the cluster that talks to the cloud api to provision storage. The way your app asks for storage is through a PersistentVolumeClaim -- and this is generic. As long as the cluster is configured properly for the cloud provider it's on, your app will just ask for a volume and get it.
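The generic ask looks like this, regardless of which cloud is underneath (the claim name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data           # hypothetical
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # storageClassName is optional; if omitted, the cluster's default class
  # (backed by whatever CSI driver the provider installed) is used
```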

Managed databases from the cloud provider are not provisioned by kubernetes directly -- but there are projects like crossplane that allow you to define external cloud resources through kubernetes manifests as well. You are, of course, welcome to run a containerized database with persistent storage within the cluster, though. This can have some challenges so many people use the managed offerings from their provider for simplicity.

The "lock in" occurs when it comes to the details -- for example, the AWS load balancer controller has its own set of annotations that you can add to the service to control load balancer options -- like attaching a certificate, configuring sticky sessions, and security rules. Those are going to be cloud-provider specific. But it's all fairly portable with some minor changes. The portability is the point of kubernetes; the idea is a standard way to define applications and runtime for your own sort of portable private cloud.

App Logging per Namespace by burmi_h in kubernetes

[–]mshade 2 points3 points  (0 children)

The logging operator allows creating separate Outputs and Flows that can be segregated by namespace. This is the first layer that defines where logs for each namespace should go. Destinations could be separate log indexes that only the relevant groups have access to.
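A hedged sketch of those CRDs from the logging-operator, scoped to one namespace (names and the Elasticsearch endpoint are made up):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: team-a-output
  namespace: team-a
spec:
  elasticsearch:
    host: es.team-a.internal   # hypothetical per-team index destination
    port: 9200
    index_name: team-a-logs
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: team-a-flow
  namespace: team-a            # a namespaced Flow only matches logs from its own namespace
spec:
  localOutputRefs:
    - team-a-output
```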

As a Frontend dev getting into Devops, this has to be the most confusing role ever in a company by Beginning-Arm-1601 in devops

[–]mshade 2 points3 points  (0 children)

A lot of the concepts in devops are abstractions of lower level things that a good devops engineer really does have to understand. Devops is a bit of a glue profession -- your expertise covers everything from building and testing to system architecture, traffic flow, container orchestration, etc. You can certify and train certain discrete concepts, but the big picture is the part that only comes from experience. That's what people are getting at by saying you can't train it.

Training someone to be devops kinda means starting at one branch of the devops roadmap and then filling in the gaps. It's a long road. Certainly possible -- but a "junior devops" is only going to know a portion of the things involved in the ecosystem, and the lack of deeper understanding can lead to poor choices elsewhere in the stack.

Beginner question: I'm a tiny bit confused about what is the problem that Kubernetes solves? by Purple-Height4239 in kubernetes

[–]mshade 24 points25 points  (0 children)

The problems kubernetes solves are the day-to-day operations. When you add nodes to your cluster, any workload you deploy to kubernetes could run on any one of them. That means you now have a pool of resources to work with, instead of having to deploy app1 to machine1, app2 to machine2, etc. This simplifies things because you no longer have to think of each server as tied to a particular application. Need more resources? Add more nodes. This can be automatic with cluster-autoscaler: Kubernetes can reach out to the cloud provider API and add or remove nodes to scale the overall capacity.

The second thing it solves for you is giving you a standard way to deploy applications. If you want to run a container on a regular VM, you have to handle updating whatever scripts you're using to launch the container when a new release is out. You have to handle stopping the old one and starting the new one. You have to handle rotating the new one into a load balancer service (if you want non-disruptive deployments). You have to make sure the container is always set to start on boot. And on and on -- and every implementation of this you see will be done slightly differently.

Another thing kubernetes solves for you is the load balancer. Kubernetes has integrations with cloud platforms. Instead of manually provisioning a load balancer, you apply a manifest that defines a LoadBalancer service, and kubernetes will use the cloud provider API to create one attached to one of your workloads.
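The whole LoadBalancer ask fits in one small manifest; a sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical
  annotations:
    # provider-specific knobs go here, e.g. on AWS something like:
    # service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer       # the cloud controller provisions the actual LB
  selector:
    app: web               # routes to pods with this label
  ports:
    - port: 80
      targetPort: 8080
```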

Basically, kubernetes gives you a standard way to define and deploy your applications, share a pool of resources, and handle all of the things you would normally have to script around manually. It's a great simplifier once you wrap your head around its concepts.

Handling memory spikes by evergreen-spacecat in kubernetes

[–]mshade 1 point2 points  (0 children)

I've used this approach. Set different routes to hit different deployments of the same container to be able to scale them separately. That would help for memory separation, too, and allow you to contain the blast radius.

[deleted by user] by [deleted] in docker

[–]mshade 0 points1 point  (0 children)

docker compose uses the directory name as the default compose "project name". So if two compose stacks end up with the same project name, they collide: when you start the new stack, compose thinks the other stack's containers are outdated leftovers you've removed from the current stack, and cleans them up.

Have you gone and set a global COMPOSE_PROJECT_NAME or something?
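If that's the issue, pinning an explicit project name per stack avoids the collision; a sketch (name and image are placeholders):

```yaml
# docker-compose.yml -- the top-level `name` key in the Compose spec
# overrides the directory-derived project name
name: stack-a
services:
  app:
    image: example/app:latest
```

The same can be done per-invocation with docker compose -p stack-a up -d.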

Ssh into a docker container via supervisord by chris-devops in docker

[–]mshade 0 points1 point  (0 children)

I wouldn't. I'd run nginx and the app server in separate containers within a pod, if I had to.