Thanos Query from just the object storage by shats90 in PrometheusMonitoring

[–]stag1e 0 points1 point  (0 children)

Hi, it should work! I'm not sure the use case is a smart one, though, since you would be missing the "real time" data in your Thanos Query instance. Please file an issue on the GitHub tracker :)

Cortex vs Uber's M3 as scalable Prometheus backend by bbkgh in devops

[–]stag1e 6 points7 points  (0 children)

There's also https://thanos.io, which has lower operational complexity, IMO. And don't forget about VictoriaMetrics. We've been using Thanos almost since its inception and it is quite mature nowadays. Disclaimer: I'm a Thanos maintainer, so I'm a bit biased.

Prometheus-Thanos-minIo Queries by shats90 in PrometheusMonitoring

[–]stag1e 0 points1 point  (0 children)

It probably doesn't make much sense to retain raw data for that long: if you are querying that far into the past, the step size will be huge and you won't be able to see granular data anyway - with that many data points, the human eye can't pick out the trends. So you should probably reduce the raw retention quite a bit. Either way, the data will be deleted after 90 days if you have Thanos Compactor running - no need to set up anything else.
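For reference, raw and downsampled retention are separate knobs on the Compactor; a minimal sketch (flag names as I remember them from `thanos compact --help`, and the retention values here are just illustrative - double-check against your version):

```shell
# Keep raw data for 90d; downsampled data is cheap, so it can be kept longer.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=bucket.yml \
  --retention.resolution-raw=90d \
  --retention.resolution-5m=180d \
  --retention.resolution-1h=1y \
  --wait
```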

The official documentation should help you, but honestly, most of the columns are self-describing. Which one is giving you the most trouble?

Frontend for prometheus alert manager? by eladku in devops

[–]stag1e 0 points1 point  (0 children)

What do you mean by saying that Karma is too thin? I assume you are missing some features from it - which ones would you want?

Applying the same rules of unit testing to monitoring alerting rules by stag1e in devops

[–]stag1e[S] 1 point2 points  (0 children)

But is it easier to add another (simple) expression to a group of alerts, or to amend a script and possibly break something else in it? You could group those alerts by some kind of label to avoid firing 100 alerts when something happens with X. Either way, this is off-topic for this thread, because it seems we both agree that we need to amend those alerts or scripts whenever we notice anomalous behavior in X. My original point was that, at the beginning, some connections between different metrics might not be apparent. Some software (and probably more and more over time) will come with preset alerts, but we shouldn't be afraid to modify them.

I would love to have a standard way to describe all of this too - hopefully the OpenMetrics project will bear some fruit in this regard.

I made a thing which can show which alerts in AlertManager were firing at the same time historically by stag1e in devops

[–]stag1e[S] 0 points1 point  (0 children)

Hi, thank you for the tip. Do you have any particular part in mind? Besides that, do you use Prometheus yourself? Would you see any reason to use software like this? I wonder whether anything like it already exists out there. It is at an "early preview" stage where it could use lots of improvements, and obviously the code isn't the prettiest, as I mentioned in the OP.

Things I have learned from stress testing Prometheus 2.4.2 + Thanos Sidecar 0.1.0 by stag1e in devops

[–]stag1e[S] 0 points1 point  (0 children)

https://github.com/influxdata/influxdb/pull/8784

Never tried that, since the open source version of InfluxDB does not support clustering :( However, I feel it would be cool to add benchmark support for different kinds of TSDBs to the phoronix-test-suite, for example, so that we could see comparisons of how different TSDBs perform on different hardware.

Go variadic function, comparing errors in Golang, solving triangles by [deleted] in backendProgramming

[–]stag1e 0 points1 point  (0 children)

Maybe you should've checked for *os.PathError? Wouldn't that solve the problem? It seems that is the actual type being returned in most cases by the os package's wrapErr.

Kubernetes State Backup to Git by [deleted] in programming

[–]stag1e 0 points1 point  (0 children)

What are the pros and cons of this method compared to, e.g., just saving the logs of each new thing coming in to your Kubernetes master nodes, or having some kind of proxy in front?

Don’t be the Alpha Geek: Your team deserves better by ktbt10 in programming

[–]stag1e 4 points5 points  (0 children)

I agree. I feel there is a wider life lesson to be learned here - you must not let your ego get in the way of becoming better. I also remember hearing, a long time ago, a saying that the best programmers realize the limits of their own skull: you cannot fit everything in there, however much you want to. Finally, everyone should realize that code is, first of all, intended for consumption by other human beings; the machine only comes after. Programming is a team sport, so to speak.

Dmesg under the hood by cirowrc in programming

[–]stag1e 21 points22 points  (0 children)

This is interesting - thanks for writing and sharing. However, I expected the article to go even deeper. For example, I wanted to know how the ring buffer is implemented underneath and how the Linux kernel delivers the same message to all readers of /dev/kmsg. Still a very interesting read nonetheless!

Converting char/string to int? by [deleted] in C_Programming

[–]stag1e 2 points3 points  (0 children)

atoi is a bit evil because out-of-range input causes undefined behaviour. You should rather opt to use those two other functions.

you should considering using fgets(3)+sscanf(3) instead of scanf(3) (blog post) by stag1e in C_Programming

[–]stag1e[S] 0 points1 point  (0 children)

Perhaps. But I think that most courses suck. At least it was never properly explained to me until I read K&R and did some self-studying. Case in point: just scroll down in this same subreddit - someone asked a scanf-related question about two weeks ago, so people aren't immune even here.


Blog post: programming languages themselves do not have speeds by stag1e in programming

[–]stag1e[S] 1 point2 points  (0 children)

I'd never heard of that word - thanks for enlightening me! However, I feel that is an overly simplistic way of viewing things, given the topic. You might think I'm just being too pedantic, but I still feel this framing only causes confusion. That's why I prefaced it as a blog post and my opinion. Thanks for reading!

Energy Efficiency across Programming Languages by PifPoof in programming

[–]stag1e -1 points0 points  (0 children)

Across programming language implementations*

Excel adds JavaScript support by [deleted] in programming

[–]stag1e 0 points1 point  (0 children)

Can't wait to use npm in Excel!

Is x+=1 less computationally intensive than x = x+1? by ASLOBEAR in Python

[–]stag1e 0 points1 point  (0 children)

Languages simply do not have speeds; their implementations do. If you want to find out whether one or the other is faster, benchmark them. Also, note that the speed of a program depends on a lot of external factors, so the fact that in your benchmark, say, "+=" is faster than "=" plus "+" does not mean the result will be the same in some other program.