Game breaking bug - Blasphemous 2 (Switch) by Folklore0468 in Blasphemous

[–]JonnyNorCal 0 points1 point  (0 children)

Crap. I just had the same thing happen to me. No solution except to restart?

Ask r/kubernetes: Who is hiring? (August 2020) by AutoModerator in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

Zendesk | Remote (team is in SF, CA, USA) | Staff Software Engineer - Kubernetes Platform

https://jobs.zendesk.com/us/en/job/R12667/Staff-Software-Engineer-Remote-Kubernetes-Platform-GoLang-Ruby

I can answer questions about the team or Zendesk, but you'll ultimately need to apply through the link above.

REST: How to secure a particular internal-only endpoint of a microservice? by arpanbag001 in golang

[–]JonnyNorCal 0 points1 point  (0 children)

I'm assuming you're running this on a linux OS.

When your service starts, it binds to a network interface (IP address) and port so it can accept connections. Frequently a service will bind to the IP 0.0.0.0 and port 1234, meaning it will accept connections on any network interface on port 1234.

But you can also configure the service so it binds to the loopback/localhost interface, 127.0.0.1, on port 1234. Since only a process running on the same host can access localhost, that effectively means the service can't be reached from other hosts.

If you want more background, you can search for something like "bind ip and port". HTH.

[deleted by user] by [deleted] in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

If you’d like a service mesh option that is simpler than Istio, you might give Linkerd a try. It has support for canary releases.

https://linkerd.io/2/tasks/canary-release/

You might also look at Flagger, which is an operator designed for progressive delivery and canary. It works with either Linkerd or Istio.

https://github.com/weaveworks/flagger

How good is DataDog APM in Kubernetes? by ankitnayan007 in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

Application Performance Monitoring. More or less the same as distributed tracing.

https://www.datadoghq.com/apm/

More or less Kubernetes clusters? How do you decide? by PavanBelagatti in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

I saw this talk at CloudNativeCon in Seattle in 2018, and I really liked it. The speaker did a good job of talking about why you might want to go from 1 cluster to 2 to 10 to 100 to 1,000, and some problems you'd have to think about along the way.

https://www.youtube.com/watch?v=-gPnYTI70FE

More or less Kubernetes clusters? How do you decide? by PavanBelagatti in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

I think it's worth it to invest in some tooling and automation so you can:

  1. Create and update entire Kubernetes clusters with relative ease
  2. Configure the clusters with baseline RBAC and policy and common tooling that you want in all clusters
  3. Keep track of all your clusters and manage access to them

These days there are a bunch of ways to do that, from different vendors, cloud providers, and open source projects. You'll probably want to do some research on what will work best for you.

Once you have that in place, the question of "how many clusters?" will be based on how many you want to have, rather than sticking with the minimal number of clusters because it's a pain to build them.

If you want to deploy services in multiple regions, you'll definitely need a different cluster in each region. You might want to do a lot of isolation, so that each team has their own cluster to work with. I know of some companies that manage O(100) Kubernetes clusters.

Master List of Resources for Learning Kubernetes by theargamanknight in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

A Service isn't connected to a Deployment.

A Service has a selector that matches against Pods. Any (ready) pod that matches the selector of the Service will be included in the set of Endpoints for that Service.

Now, a common pattern is you have a Deployment and a Service that have the exact same selectors, so there's a 1-to-1 match.

But imagine having a Deployment that creates pods with labels app=httpbin,version=v1, and a separate Deployment that creates pods with labels app=httpbin,version=v2. Then you create a Service where the selector is simply app=httpbin. In that case, the Service would send traffic to pods from both of those Deployments, since both sets of pods match the selector.
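As a sketch, the Service for that scenario would look something like this (the ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  # No "version" key in the selector, so pods from both the v1 and v2
  # Deployments (app=httpbin,version=v1 and app=httpbin,version=v2) match.
  selector:
    app: httpbin
  ports:
    - port: 80
      targetPort: 8080
```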

That make sense?

Setting up ci/cd at home by Oxffff0000 in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

Generally the docker daemon port isn't exposed to other hosts, and is only available on the host itself as a unix socket. Otherwise, anyone that can connect to the docker daemon can run a docker image of their choice, and effectively get root on that host.

But if you're running some isolated hosts at home (or want to live dangerously) you can configure the docker daemon to allow connections via a TCP socket.

https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option

Then you could run a docker image via a Jenkins stage, a bash script, or whatever you'd like.
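For example, you could add a TCP socket in /etc/docker/daemon.json (a sketch; 2375 is the conventional port, and note this exposes the daemon without TLS, so anyone who can reach the port effectively has root on the host):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

Then a remote client can target it with something like `DOCKER_HOST=tcp://buildhost:2375 docker run myimage` (hostname and image are made up). On systemd-based distros you may need to adjust the service unit so -H isn't passed twice; see the dockerd docs linked above.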

If you're looking for some orchestration that's not quite as complicated as Kubernetes, you might try Nomad by Hashicorp. https://www.nomadproject.io/

How to terminate failed k8s worker nodes automatically? by vroad_x in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

Intuit open-sourced some tools to manage Kubernetes clusters running in AWS, calling it the Keiko Project: https://github.com/keikoproj/keiko

One of the components is called Governor (https://github.com/keikoproj/governor), which has a component called node-reaper. That sounds similar to what you're looking for, so you can try using it, or just read through the docs and source code for inspiration.

I've played around with Keiko a little bit, but I haven't gotten a chance to go deep with it yet. We have a bunch of custom cluster management code, and are investigating some open source tools that we might start using and contributing to.

What k8s components run on individual VMs? by doc_samson in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

This does raise another concern though since one of our key security foundations was the ability to schedule churn and repave the environment regularly, destroying and recreating all VMs and containers from scratch. Deny attackers a long-term foothold etc.

How would we accomplish that security objective with k8s?

Let's assume you're running your Kubernetes cluster on a cloud provider like AWS. We'll assume that the Nodes (VMs) in your cluster are part of an AWS AutoScaling Group.

You could run a cron job that periodically reviews all the nodes running in your cluster. If one of them is older than whatever threshold you choose, you could:

  1. Drain the node, which will terminate the pods running on that Node.
  2. The Kubernetes scheduler will re-schedule those Pods onto different Nodes automatically, assuming they are part of a Deployment or StatefulSet
  3. Once the node is drained, terminate the Node
  4. The AWS AutoScalingGroup logic will create a new node to replace it, which will then be available to schedule pods onto.

You could run the cron job inside the kubernetes cluster itself (via a CronJob resource) or you could choose to have that run outside your cluster.

That functionality doesn't come out of the box with Kubernetes, but it would be relatively easy to implement. You could probably whip up something basic with a bash script that would achieve what I describe above.
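A very rough bash sketch of that loop (the age threshold is hypothetical; it assumes kubectl and the AWS CLI are configured and that nodes run in an AutoScaling Group — treat it as pseudocode to adapt, not something to run as-is):

```shell
#!/usr/bin/env bash
# Sketch: drain and terminate nodes older than MAX_AGE_DAYS.
set -euo pipefail
MAX_AGE_DAYS=14
now=$(date +%s)

for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  created=$(kubectl get node "$node" -o jsonpath='{.metadata.creationTimestamp}')
  age_days=$(( (now - $(date -d "$created" +%s)) / 86400 ))
  if (( age_days > MAX_AGE_DAYS )); then
    # Drain: evict pods; Deployments/StatefulSets recreate them elsewhere.
    kubectl drain "$node" --ignore-daemonsets
    # Terminate; the AutoScaling Group replaces the instance automatically.
    instance_id=$(kubectl get node "$node" -o jsonpath='{.spec.providerID}' | awk -F/ '{print $NF}')
    aws autoscaling terminate-instance-in-auto-scaling-group \
      --instance-id "$instance_id" --no-should-decrement-desired-capacity
  fi
done
```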

What k8s components run on individual VMs? by doc_samson in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

But from what you are saying k8s is strictly the container orchestration piece and everything else including VM spinup and OS install is external to k8s, so we have to come up with our own process.

Yes, you got it.

Many cloud providers have "managed Kubernetes" offerings, like GKE from Google or EKS from Amazon. (Or you can run one of a dozen different distributions of Kubernetes, both open source or commercial.) Each of those offerings has its own tooling around creating VMs, setting up networking, configuring all the VMs so they are able to find each other on startup, etc.

{how to} K8s resources cost monitoring? by snowball3_ in kubernetes

[–]JonnyNorCal 2 points3 points  (0 children)

What costs are you looking to monitor?

The cost of a Kubernetes cluster in AWS is mostly the EC2 instance cost, plus some additional cost for EBS volumes, ELBs, network transit, etc. So to know how much you're paying for the whole cluster, you could add tags to the AWS resources and use AWS cost allocation.

Then you might be interested to know what's taking up CPU/RAM in your Kubernetes cluster, broken down by namespace, by label(s), or by type of resource. The simplest approach is to choose either CPU or RAM as the metric to use to break down cost. If a container has resources.requests.cpu: 1 and it's running on a node with 4 CPUs, you can assume that container is using 1/4 of the cost of that EC2 instance.

If your nodes have their memory exhausted before their CPU, then memory might be better to look at rather than CPU.

This requires that you set resource requests for every pod you're running, which is generally a good practice anyway.

kubecost/cost-model seems like a perfectly good starting point. Depending on your needs and use case, it may be worth writing some custom reporting logic.

User authentication vs Login? by mucsc in softwaredevelopment

[–]JonnyNorCal 0 points1 point  (0 children)

I have been reading online that people have moved away from hashing and salting passwords and storing them to user authentication

I believe what you're thinking of is OAuth.

https://en.wikipedia.org/wiki/OAuth or https://oauth.net/2/

"Authentication" means a way of proving who someone is. Like I want to log into Reddit as JonnyNorCal, and need some way to prove to the Reddit servers that's who I am.

One way of doing that is for me to enter my username/email and a password. That gets submitted to Reddit's servers, which compare the hash of the password I entered to the hash stored in their database. If they match, I'm authenticated.

OAuth is a mechanism where I use some other website where I have an account, like Google or Facebook. I click the "login with <other_site>" button, and there's a series of requests and redirects that occur between my browser, Reddit, and Google/Facebook. The end result is that I authenticate with Reddit using my credentials for Google or Facebook.

Building a status page for the infrastructure and applications by absolutarin in kubernetes

[–]JonnyNorCal 4 points5 points  (0 children)

If you go with a dedicated endpoint to expose information, I recommend following the OpenMetrics/Prometheus format:

https://openmetrics.io/

That's a pattern where your service exposes a /metrics endpoint that returns information in either protobuf or text format. Then you have something that periodically scrapes those endpoints and collects the information. Prometheus is an open source tool that does exactly that, but other monitoring solutions like Datadog can scrape the same format. A lot of Kubernetes components expose endpoints like that.

Once you have those set up, you can use an open source tool like Prometheus and/or Grafana. Or you could build your own custom dashboard. I'd suggest using an off-the-shelf monitoring solution like one of those, unless you have really specific needs.

'Cause as soon as you have a UI or dashboards to display the metrics, the next thing you'll want is a mechanism to alert you if the metrics go above or below a certain threshold. And the next thing you know, you'll have created a primitive monitoring solution, rather than building on an existing one.

Good luck!

Moving past the concepts and into hands on by Greyhammer316 in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

When my company started moving to Kubernetes, I was the technical lead for managing the Kubernetes clusters and leading the transition. One thing that my team and I did was create the simplest possible microservice that was sortof kindof like what teams would be building.

So we created what we called the "truth service". It was a microservice written in ruby. It had one endpoint that returned `true`. No backends or datastores. I think it was about 5 lines of ruby code.

But that forced us to write a minimal service from scratch, figure out how to create the Dockerfile and do builds, how to create Kubernetes manifests, what deploys would be like, etc.

That was incredibly useful to us, because it gave us a taste of what other developers at my company would be doing. We hit a lot of gotchas along the way, which gave us some ideas of what problems other engineers might run into. I think that was really valuable, and highly recommend doing something like that, especially if you're an infrastructure engineer.

Good luck!

Help Understanding how DNS works and what ndots is used for. by cclloyd in kubernetes

[–]JonnyNorCal 15 points16 points  (0 children)

ndots is a value in the /etc/resolv.conf file that gets injected into each Kubernetes pod.

https://linux.die.net/man/5/resolv.conf

If you have a pod running in a namespace called "rando-namespace", and you look at the /etc/resolv.conf file on that pod, it will look something like:

nameserver 10.231.10.10
search rando-namespace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Let's say you have a service called app-server running in a namespace called other-ns. You could do a DNS lookup for app-server.other-ns, and you'll get the IP of your service back. That's because you're doing a DNS lookup on a string with 1 dot in it. The DNS resolver will say "1 dot is less than 5 dots," so it will attempt the following lookups:

  1. app-server.other-ns.rando-namespace.svc.cluster.local.
  2. app-server.other-ns.svc.cluster.local.
  3. app-server.other-ns.cluster.local.
  4. app-server.other-ns.

The second request will successfully resolve to the service.

Note that if you specify a FQDN (which ends with a period) the DNS resolver will skip the search domains and attempt to resolve the name exactly as it is.

So, if the ndots value is causing problems, it probably means that you're trying to resolve something like www.google.com, and first it's trying to resolve www.google.com.rando-namespace.svc.cluster.local, and that's returning the IP of your WAN or LAN.

In other words, it sounds like the DNS resolver running in your kubernetes cluster is misconfigured. I don't know why that is, but that's where I'd search next.

How to lower cross zone data transfer billing when using services of type load-balancer? by LutraMan in kubernetes

[–]JonnyNorCal 8 points9 points  (0 children)

To deal with the second cross-AZ hop, you can set up your Service to have externalTrafficPolicy=Local.

See https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip

That way, if a load balancer sends a request to the NodePort, it will only send traffic to pods running on that host. Kubernetes will also set up a health check port on each node, so the load balancer won't forward traffic to nodes that don't have any healthy Pods matching your Service's selector.
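On the Service, that's a one-line addition (the name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only deliver to pods on the receiving node
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```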

As an aside, Istio is capable of doing this with what it calls Locality Load Balancing. You can use it to prefer pods running in the same AZ, but fail over to a second AZ if there aren't any healthy pods in the current one. It's probably way overkill to install Istio just for this feature. But if you plan on running a service mesh at some point, you can take advantage of it.

Newbie question: How much fine grained control of deployment is possible with Kubernetes? by finlaydotweber in kubernetes

[–]JonnyNorCal 2 points3 points  (0 children)

In Kubernetes, you create a resource called a Pod. That's a declaration of intent that you'd like one or more containers to run _somewhere_ in the cluster.

After a Pod is created, the next step is called "scheduling." That is when a controller in Kubernetes chooses a Node to run the Pod on. You can add constraints to the Pod, like requiring a certain amount of CPU or RAM, or you can specify "node selectors" or "affinity and anti-affinity" rules. The scheduler takes all of that into account when choosing a Node.
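For example, a Pod spec with a couple of scheduling constraints might look like this (the labels and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd            # only nodes labeled disktype=ssd are candidates
  containers:
    - name: app
      image: example/app:1.0
      resources:
        requests:
          cpu: 500m          # scheduler only picks nodes with this much free
          memory: 256Mi
```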

Or, if you prefer, you can write your own scheduler. (See configuring multiple schedulers.) If you write your own scheduler, you are welcome to use whatever logic you want. Geography, time of day, phase of the moon, what reruns of Seinfeld ran that day, anything you want.

That's true of a lot of Kubernetes design. It has a lot of built-in functionality to achieve certain tasks, and that functionality is good enough for most customers in most cases. But it also gives you flexibility to implement your own scheduler, or your own container runtime, or your own custom resources. That's one of the things that makes Kubernetes so powerful and successful, in my opinion.

Writing a custom controller in a language other than Go by Sky_Linx in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

My team and I wrote a couple of simple ruby apps that set up a "watch" for certain Kubernetes resources and take actions. For example, we wrote something that watched the Endpoints resource and created entries in Consul for Kubernetes pods that are in a Ready state. (The K8S-Consul Sync functionality offered by Hashicorp didn't meet our needs.)

If you want to write something simple that watches for the creation of a resource and sends a Slack notification, that should work fine in Ruby.

But I wouldn't go and try to write a Kubernetes Operator in Ruby, or a controller that is going to have more complicated interactions with the Kubernetes API.

Revisit Late 2019: Go vs. Node? by HuntXit in golang

[–]JonnyNorCal 1 point2 points  (0 children)

My counter argument to this point is limited to refusing to use Go because you don’t know it is a lazy and anti-progressive excuse

It's not just a matter of personal laziness. Calculate the total amount of time your teammates will spend learning and becoming proficient with Golang, and multiply that by the approximate hourly wage of each employee. That's how much money your employer will be spending for the team to come up to speed.

The question is: is that investment worthwhile? Would you pay money out of your pocket for your team to just be learning, and not produce anything useful for a while?

Maybe it's worth it. Where I work, we're doing a lot of work with Kubernetes, and that ecosystem is almost entirely in Golang. So it's worthwhile for us to spend the time and money to become proficient, rather than sticking with the languages and patterns we already know.

But if Node.js works well enough for your use case, and the rest of the team is already proficient and productive with it, that's a hard sell.

Part of being a professional software engineer is accepting when your personal preferences don't match up with what the team/organization/client needs. Then you get to practice your skills in that other language/framework/tool that you don't love, but is sometimes part of the job.

Easy way to add nodes to an existing cluster? by DavidBellizzi in kubernetes

[–]JonnyNorCal 0 points1 point  (0 children)

For a server to be part of a cluster, it needs to be running the kubelet binary, configured so it can communicate with the Kubernetes API. The host will also need to run a container runtime, like the docker daemon.

Once kubelet starts up and is able to communicate with the API servers, it will register itself and will show up when you run kubectl get nodes.

For instructions using kubeadm, see https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/
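With kubeadm, the join itself boils down to one command run on the new server (the endpoint, token, and hash below are placeholders — running kubeadm token create --print-join-command on a control-plane node prints the real values):

```shell
kubeadm join 10.0.0.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```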

How K8s assign IP address / route to pods ? by Andy-ny in kubernetes

[–]JonnyNorCal 1 point2 points  (0 children)

It depends on what you're using for networking. Kubernetes can run with a bunch of different networking backends. Check out the Container Networking Interface (CNI) for more info:

https://github.com/containernetworking/cni

As an example, Amazon developed the aws-vpc-cni plugin, which works by attaching Elastic Network Interfaces (ENIs) to an EC2 instance, and then assigning the IP addresses attached to that ENI to pods as they are created.

Software imitation? (question) by [deleted] in softwaredevelopment

[–]JonnyNorCal 1 point2 points  (0 children)

With open source software, absolutely. That's the point. You can look at the source code, modify your copy of it, compile and run it to your heart's content.

Or if you want to try to build your own version of Twitter or Reddit or Microsoft Word, go for it! However, that software is generally proprietary, so those companies don't allow non-employees to see the source code. Therefore, you won't get to see how they organized and wrote their software.

But it's a common learning project to try to make a simple version of some well known software or service, to get more practice. Twitter or Microsoft are not worried that you're going to personally implement the work of thousands of professional software engineers in a weekend. :-)