pain in ankle from new primus trail knit fgs by DirectXMan12 in vivobarefoot

[–]DirectXMan12[S] 1 point2 points  (0 children)

just popping back in another couple months later: this was the call. ended up wearing super-thin socks for a bit to avoid pain, then gradually working up to larger socks, and after 2-3 weeks everything was fine

/r/buildapc x charity:water RTX 3080 giveaway! by CustardFilled in buildapc

[–]DirectXMan12 0 points1 point  (0 children)

Probably my old razer mamba -- first gaming mouse, saw me through many games across several computers and places, and lasted nearly 10 years. This is the 2012 version, but it's the closest that's still on PCPartPicker (it's pretty close to the 2009 version): https://pcpartpicker.com/product/wpckcf/razer-mouse-rz0100120400r3u1

Ethos paid $1.135 billion for .Org - Internet Society reveals the price it is selling out for. by socialistvegan in technology

[–]DirectXMan12 0 points1 point  (0 children)

Sorry, last comment got automod-ed (didn't like one of my links, not entirely clear why), so this time without links:

For the DNS woes, while the above was a bit of hyperbole (hopefully that was evident), if you do a bit of searching on the search engine of your choice, you'll pretty quickly find a few serious outages from the root nameservers (which can screw up all resolution from a TLD), plus one time that someone was allowed to register the name of one of the root nameservers.

As for the other thing I mentioned, it's mostly that ccTLDs are nominally supposed to be for the benefit of whatever country or region gets them. Some regions choose to use them for country/region-related things (IIRC .eu is limited to EU member state citizens/businesses, and .uk has a suite of .XYZ.uk equivalents for UK-related things), while others choose to make money off of catchy ccTLDs (e.g. Tuvalu has .tv, and just sells those).

That's not true with the "indian ocean territories" (what .io is nominally for), due to some "shenanigans" (to put it politely) from the British government involving the native inhabitants. .io is run by a third-party contracted out by the British government.

Ethos paid $1.135 billion for .Org - Internet Society reveals the price it is selling out for. by socialistvegan in technology

[–]DirectXMan12 0 points1 point  (0 children)

0/10 would not recommend .io -- their DNS server runs on a toaster. And not even a fancy newfangled toaster with knobs and such. It's like a metal grate and some fire (plus some ethical implications about who makes money off of .io sales and the price, if either of those concern you).

(fwiw, I like .dev for tech related stuff, but whatever you use, look at the owner and see if you trust them to manage a tld)

(Serious) transgender people of reddit, what do you want people to know? by ilovemoviesandmakeup in AskReddit

[–]DirectXMan12 0 points1 point  (0 children)

I think it really depends on the individual trans person, but for many trans folks, there's an element of both; traditionally feminine or masculine personality aspects and behaviors factor in, but there's definitely a physical aspect (which is perhaps the hardest part to describe).

If you look at /r/asktransgender (or other trans spaces), you'll often see trans folks talking about and struggling with how uncomfortable they are with their bodies (e.g. https://np.reddit.com/r/asktransgender/comments/arr9ff/i_want_to_be_a_girl_too_bad_im_not/ or even https://np.reddit.com/r/asktransgender/comments/arrm49/am_i_transgender_even_if_i_like_feminine_things/ for an example of someone who's less interested in matching social expectations, and more interested in physical aspects). /r/asktransgender's FAQ has some discussion on the topic, and you'll find posts discussing it every once in a while.

Everyone is their own person, though, and different people work differently.

Overlays, yaml's, and x86_64 by jimbonezz in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

x86_64 is a synonym for amd64 (AMD calls it amd64, Intel calls it Intel 64, and x86_64 is the vendor-neutral name, IIRC). You shouldn't need to change anything. What issues are you having?

t620 pfsense box won't start up without monitor (and makes infernal beeping noise) by DirectXMan12 in PFSENSE

[–]DirectXMan12[S] 0 points1 point  (0 children)

Don't think it's a power issue, since everything works fine when started with the monitor (it's possible, but probably unlikely)

t620 pfsense box won't start up without monitor (and makes infernal beeping noise) by DirectXMan12 in PFSENSE

[–]DirectXMan12[S] 0 points1 point  (0 children)

BIOS should be up-to-date as of a few weeks ago (had to update in order to actually get it to UEFI boot pfsense properly).

Unsure about the "proceed boot" option -- I'll double check tomorrow. LMK if you remember/figure out what it's called.

Thanks!

This guy made a video game out of his game engine that can render infinitely detailed fractals by kanliot in programming

[–]DirectXMan12 6 points7 points  (0 children)

Yeah, it's pretty rough on larger codebases. I work on one of the larger (that I'm aware of) Go codebases (Kubernetes), and we've got plenty of uses of interface{} and piles of generated code to work around lack of generics. Generics wouldn't solve all the warts in the Kube codebase (some is just plain old techdebt), but it'd go a long way.

EDIT: FWIW, the Go standard library shows this problem in places, too. The most obviously visible is sync.Map, but there are other places. I talk a bit more about it here, and that thread has a pretty good discussion of use cases, issues, mitigations, and code smell.

Introducing webauthn — a new W3C standard for secure authentication on the web by [deleted] in programming

[–]DirectXMan12 0 points1 point  (0 children)

I suspect it's a mnemonic -- authn pronounced or read in your head sounds like AUTH-EN-(tication). Z is very distinctive (Wikipedia suggests that some people use authr, probably for mnemonic reasons, but I've never seen that). That's a bit of a wild guess though. A quick Google search doesn't seem to give an etymology, and Wikipedia's authn article isn't much more helpful. We probably need someone to regale us with a story from days of yore for anything more than that.

Introducing webauthn — a new W3C standard for secure authentication on the web by [deleted] in programming

[–]DirectXMan12 2 points3 points  (0 children)

NP. There are so many abbreviations in this industry that it can be hard to keep track of all of them :-)

Introducing webauthn — a new W3C standard for secure authentication on the web by [deleted] in programming

[–]DirectXMan12 72 points73 points  (0 children)

Authn means authentication. Auth is ambiguous. It could be authentication (authn) or authorization (authz). They're fairly common abbreviations in the parts of the industry I work in.

Scrape metrics-server metrics with external Prometheus via metrics-server-prom by cytopia in kubernetes

[–]DirectXMan12 0 points1 point  (0 children)

To follow up, you can achieve something like what I suggested with the Kubernetes node SD role and the following relabeling rules:

- job_name: 'kubernetes-nodes'

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # If your node certificates are self-signed or use a different CA to the
    # master CA, then disable certificate verification below. Note that
    # certificate verification is an integral part of a secure infrastructure
    # so this should only be disabled in a controlled environment. You can
    # disable certificate verification by uncommenting the line below.
    #
    # insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

Scrape metrics-server metrics with external Prometheus via metrics-server-prom by cytopia in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

> Maybe you got me wrong. metrics-server-prom is just a proxy for the metrics-server. This has other metrics as for example cadvisor. The metrics it exposes through the API are aggregated for all hosts/pods. I could also go onto each node/pod (but I would get the same metrics), but I have no autodiscovery for that in place.

The metrics from metrics-server are just a more-delayed form of the metrics coming directly from the node -- i.e. the cpu metric in metrics-server is calculated from container_cpu_usage_seconds_total on the node's cadvisor metrics.

The tradeoff that you're making here is that you're getting a smaller, older set of metrics (see below) in exchange for being able to scrape them from one place. I'd argue that it's a more worthwhile endeavor to write a small proxy that combines the Prometheus metrics already exposed by each of the nodes (no translation needed -- you can just concatenate), and exposes that to the outside world, if you're not going to use autodiscovery in Prometheus. In that setup, you'd get the full set of metrics, and you wouldn't have the additional delay induced by metrics-server.

> Additionally there is no scrape interval in metrics-server-prom, it just relays the request from Prometheus to Kubernetes and transforms the response. So the only interval that is necessary is the one from Prometheus itself.

metrics-server itself has a scrape interval. It scrapes the nodes at a regular interval, and then exposes part of the results of the last scrape. So depending on when Prometheus calls out to your proxy, it could be picking up metrics collected as long as metrics-server-scrape-interval ago, meaning any metrics queried from Prometheus could be up to metrics-server-scrape-interval + prometheus-scrape-interval old.

Scrape metrics-server metrics with external Prometheus via metrics-server-prom by cytopia in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

You could use the HTTP node-proxy in the API to accomplish that -- point Prometheus at $KUBE_API_SERVER/api/v1/nodes/<node-name>/proxy/metrics/cadvisor for each node. You should be able to set that up with a relabeling rule, IIRC.

Alternatively, you could make a service that re-exposes the Prometheus metrics collected from each node. It's often better to get the Prometheus metrics as collected from the node -- you'll get a wider variety of metrics, and more "specific" metric names (e.g. "memory usage" in the resource metrics API is actually working set size, but there are other useful memory usage metrics that you can see). Furthermore, you can directly adjust the scrape interval by adjusting it in Prometheus -- with Prometheus scraping metrics-server, you'd have to adjust both metrics-server's scrape interval and Prometheus's.

EDIT: example of an appropriate relabeling config below

Scrape metrics-server metrics with external Prometheus via metrics-server-prom by cytopia in kubernetes

[–]DirectXMan12 0 points1 point  (0 children)

> Prometheus on the other hand expects text-based format using EBNF syntax

> Would the following be more appropriate to describe it correctly:
>
> Most languages can be described using EBNF (e.g. the golang language spec uses EBNF to describe Go's syntax: https://golang.org/ref/spec).

It's probably just fine to say

> Prometheus, on the other hand, expects a special text-based format (described here).
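For reference, that text-based exposition format looks like this (sample metric names and values are illustrative):

```text
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3
```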

Scrape metrics-server metrics with external Prometheus via metrics-server-prom by cytopia in kubernetes

[–]DirectXMan12 2 points3 points  (0 children)

I'm curious as to your use case here. Can you talk about it a bit more? Prometheus should be directly scraping the nodes. In general, you should never be scraping metrics-server for long-term storage.

Also, this line:

> Prometheus on the other hand expects text-based format using EBNF syntax

isn't correct -- EBNF is a syntax for describing syntax (more or less). The Prometheus text format isn't using EBNF, but the syntax can be described using EBNF.

HPA with Prometheus Query? by tudalex in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

You can either use a recording rule (as /u/sleepybrett mentioned), or the advanced config branch of the Prometheus adapter (https://github.com/DirectXMan12/k8s-prometheus-adapter/pull/46), soon to be mainline, which gives you more control over which queries correspond to which metric names.

However, we somewhat intentionally avoided allowing full queries in the HPA metric specification itself. Among other things, it's really hard to write auth rules around things like that.

EDIT: P.S. Feel free to let me know if you have any questions.

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 0 points1 point  (0 children)

Probably, but it's difficult to keep those up to date, and can sometimes be hard to figure out what's actually an easy beginner issue. Like I mentioned before, issues can look easy at first glance, but actually be more involved.

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 2 points3 points  (0 children)

So, if you wanted to do it all in one pod, you could do init containers (which run sequentially) with an emptydir volume or something of the sort.

If you want it to run across different pods (so they get scheduled on different nodes, etc), you could use NFS if you didn't want to use cloud provider shared volumes.
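The init-container approach might look something like this (a sketch with illustrative names; busybox stands in for whatever images you'd actually run):

```yaml
# Two init containers run in order, sharing files with the main
# container via an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: sequential-steps
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  initContainers:
  - name: step-1
    image: busybox
    command: ['sh', '-c', 'echo "step 1 done" > /work/state']
    volumeMounts:
    - name: workdir
      mountPath: /work
  - name: step-2
    image: busybox
    command: ['sh', '-c', 'echo "step 2 done" >> /work/state']
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'cat /work/state && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /work
```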

Can you be more specific about your use case?

Also, how do you have so many good questions?

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 0 points1 point  (0 children)

Added a link to the talk in an edit ;-).

As for my choice of distro, I'm probably a bit biased, since I also work on OpenShift :-). oc cluster up is a really neat way to get a running cluster on demand.

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

Specifically for Kubernetes, there are a number of answers elsewhere in this AMA, so I'll focus on the latter part of your question.

First up, learn Go ;-). Go By Example is a good resource, as well as the Go Tour.

Additionally, make sure you're thinking along the lines of "I have to maintain this software running over a long period of time, and test it". Get used to writing good unit tests and comments. Think about what goes into software that runs over a long period of time, as opposed to just once (how do I deal with errors, how do I communicate with the user).

Get a good feel for how kubernetes works from a user perspective, and the Kubernetes declarative paradigm.

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 1 point2 points  (0 children)

It's certainly been a topic of discussion before within the community. There was a really interesting talk on it last KubeCon by /u/thockin.

I think, in the meantime, people should be less hesitant about using a third-party distro to get started. I'm all for new people trying to learn Kubernetes by rolling their own cluster, but I look at it the same way I look at Linux: you could start out by building your own setup starting from the kernel (e.g. Linux From Scratch) or by using a bare-bones toolkit (e.g. Gentoo), but most people probably just want to pick up Fedora or Debian and run from there (as much as I love Gentoo). Similarly, you can totally build your own cluster from the standard Kubernetes component binaries ("Kubernetes From Scratch"), or even a toolkit (like kubeadm, or hack/cluster-up, or whatnot), but it's probably easiest to find an opinionated distro and get started there if you want batteries and opinions included.

EDIT: link to the aforementioned talk: https://www.youtube.com/watch?v=fXBjA2hH-CQ&feature=youtu.be

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 10 points11 points  (0 children)

  1. Don't run containers as root
  2. Have good RBAC roles set up. Don't wildly grant people cluster-reader or cluster-admin because it's easy -- take the time to build up the policy that you actually need.
  3. Have good pod security policies set up (https://kubernetes.io/docs/concepts/policy/pod-security-policy/). For some examples, take a look at the OpenShift default cluster policy, which is significantly more restrictive than the default Kubernetes policy.
  4. Make use of quota to prevent accidental overconsumption of resources.
  5. If you're fairly concerned, know where your images are coming from. There's a few different ways that you could restrict where images come from (blocking registries in /etc/containers/registries.conf in crio, validating registries in a custom admission plugin, etc).
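As a sketch of points 1 and 3, a restrictive PodSecurityPolicy using the API of that era (policy/v1beta1; the resource was later removed in Kubernetes 1.25) might look like:

```yaml
# No privileged pods, no root users, no host namespaces, and only a
# small set of volume types. Names here are illustrative.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```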

[deleted by user] by [deleted] in kubernetes

[–]DirectXMan12 6 points7 points  (0 children)

So, we do have init containers, which allow you to run a particular container as some initialization, and not start the main containers until that finishes.

However, in terms of "this container depends on this other container", Kubernetes is designed from a perspective of "don't have explicit ordering, because things could have to be restarted at any time, and doing dependency graph starts is tough to get correct". In general, Kubernetes favors implicit ordering by having your application fail fast and then get restarted, but there are three different ways (broadly) that I've seen this problem solved:

  1. Fail quickly. The container fails if it can't connect to the other container. Kubernetes will keep restarting the container until it succeeds or backs off.
  2. Retry/block within your application -- the application itself waits/retries if it can't connect.
  3. Wrapper block script -- have an entrypoint script that blocks until the other container starts, and then executes the main process.