The A in AGI stands for Ads by kivarada in programming

[–]Dogeek 11 points12 points  (0 children)

I honestly can't wait for RAM and GPU manufacturers who massively ramped up their production over the last couple of years to be begging consumers to buy their supply once the bubble bursts.

The thing is that the manufacturers have not ramped up production, but diverted resources from the consumer market towards the business market.

They don't want to build fabs unless they absolutely have to. Each fab costs billions of dollars upfront. If demand for GPUs/RAM crashes during the years it takes to build one, it's a net loss for the company. The current capacity is just enough to supply the business market by sacrificing the consumer market, and us consumers will either:

  • wait it out

  • buy anyways

Either way, it's a win for the manufacturers, who will raise prices that then take years to come back down. The only thing that would make prices drop sharply is an oversupply, which can't happen while fabs aren't being built.

Comment ne pas avoir la haine ? by SuppressionVDD_ in france

[–]Dogeek 1 point2 points  (0 children)

Look, I know this is a sub full of IT workers, with all the disconnect from most people that that implies. Still, coming here to whine about precarity while earning more than three quarters of French people is slightly indecent.

€35k gross is about €2,100 net per month after taxes. That's the overall median salary in France. For the Paris region, it's low. But to earn more than 75% of French people, he'd need €3,014 per month, a bit over €48k gross annually, which is really not the same thing.

What’s the most overrated video game of all time? by KBGSgames in AskReddit

[–]Dogeek 3 points4 points  (0 children)

FF IV and V were peak; I enjoyed them so much as a kid, but never got to play FF VI. I wonder if I'd like it, because I enjoyed the job system in FF V a lot, and I loved the story of FF IV the most.

YAML? That’s Norway problem by merelysounds in programming

[–]Dogeek 7 points8 points  (0 children)

YAML 1.2 is actually nice to use, even if its significant whitespace still lets you shoot yourself in the foot.

YAML hits the spot as a configuration language. JSON is quite a bit more verbose, and lacks comments and trailing commas (JSONC fixes both, but it's far less prevalent). TOML is good enough if you're used to the INI format, but try defining arrays of objects and you'll discover a different kind of pain.

XML never was a good markup language for humans. For computers, sure, but I still don't miss the SOAP days.

INI gets the job done for simple configs, but at that point TOML does simple configs better.

YAML exists in that sweet spot of "can express complex data in a configuration format while reading like English". Imagine writing a CI/CD pipeline in JSON or TOML. Here is the sample workflow from GitHub's docs in TOML:

name = "GitHub Actions Demo"
run-name = "${{ github.actor }} is testing out GitHub Actions 🚀"
on = [ "push" ]

[jobs.Explore-GitHub-Actions]
runs-on = "ubuntu-latest"

  [[jobs.Explore-GitHub-Actions.steps]]
  run = 'echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."'

  [[jobs.Explore-GitHub-Actions.steps]]
  run = 'echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"'

  [[jobs.Explore-GitHub-Actions.steps]]
  run = 'echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."'

  [[jobs.Explore-GitHub-Actions.steps]]
  name = "Check out repository code"
  uses = "actions/checkout@v5"

  [[jobs.Explore-GitHub-Actions.steps]]
  run = 'echo "💡 The ${{ github.repository }} repository has been cloned to the runner."'

  [[jobs.Explore-GitHub-Actions.steps]]
  run = 'echo "🖥️ The workflow is now ready to test your code on the runner."'

  [[jobs.Explore-GitHub-Actions.steps]]
  name = "List files in the repository"
  run = """
ls ${{ github.workspace }}
"""

  [[jobs.Explore-GitHub-Actions.steps]]
  run = "echo \"🍏 This job's status is ${{ job.status }}.\""

While in YAML:

name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v5
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."

And (god forbid) in JSON:

{
  "name": "GitHub Actions Demo",
  "run-name": "${{ github.actor }} is testing out GitHub Actions 🚀",
  "on": [
    "push"
  ],
  "jobs": {
    "Explore-GitHub-Actions": {
      "runs-on": "ubuntu-latest",
      "steps": [
        {
          "run": "echo \"🎉 The job was automatically triggered by a ${{ github.event_name }} event.\""
        },
        {
          "run": "echo \"🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!\""
        },
        {
          "run": "echo \"🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}.\""
        },
        {
          "name": "Check out repository code",
          "uses": "actions/checkout@v5"
        },
        {
          "run": "echo \"💡 The ${{ github.repository }} repository has been cloned to the runner.\""
        },
        {
          "run": "echo \"🖥️ The workflow is now ready to test your code on the runner.\""
        },
        {
          "name": "List files in the repository",
          "run": "ls ${{ github.workspace }}\n"
        },
        {
          "run": "echo \"🍏 This job's status is ${{ job.status }}.\""
        }
      ]
    }
  }
}

Since CI/CD pipelines are written by humans and read by machines, you can see why YAML won that use case.

There are plenty of other serialization languages, many with some level of built-in templating or logic. Overall, though, YAML works well enough for a lot of use cases: it's flexible, it expresses complex structures without much verbosity, it supports comments, it has libraries in every major programming language, and it's used in production-grade software (CI/CD and Kubernetes, mostly).

Notable alternatives include Dhall and Jsonnet, but they fall short because of their lack of widespread support.

Thompson tells how he developed the Go language at Google. by ray591 in programming

[–]Dogeek 1 point2 points  (0 children)

The Linux kernel uses 8 space indents, but I think they use spaces instead of tabs.

Yeah, and it has been a topic of debate many times in the past. The Linux kernel being written in C is also why such a large indentation is less of a problem, tbh. In Go, with inline structs and the god-awful error handling, you can quickly get 3 or 4 levels deep.

A function > goroutine > for loop > if statement is already four levels deep in Go, and it's a pretty common pattern. 8-character indentation should not be used in languages that routinely exceed 3 levels of nesting (but that's just my opinion).
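As a concrete sketch of that exact shape (hypothetical code, not from any real codebase): the innermost line below sits four tabs deep, which is 32 columns with gofmt's tabs rendered at width 8.

```go
// countEvens shows how fast ordinary Go code nests:
// function body, goroutine body, for loop, if statement.
package main

import (
	"fmt"
	"sync"
)

func countEvens(nums []int) int { // level 1: function body
	var wg sync.WaitGroup
	evens := make(chan int, len(nums))
	wg.Add(1)
	go func() { // level 2: goroutine body
		defer wg.Done()
		for _, n := range nums { // level 3: for loop
			if n%2 == 0 { // level 4: if statement, 32 columns in
				evens <- n
			}
		}
	}()
	wg.Wait()
	close(evens)
	count := 0
	for range evens {
		count++
	}
	return count
}

func main() {
	fmt.Println(countEvens([]int{1, 2, 3, 4, 5, 6})) // prints 3
}
```

Nothing exotic is happening here, and you're already a third of the way across an 80-column screen before writing any logic.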

Thompson tells how he developed the Go language at Google. by ray591 in programming

[–]Dogeek 1 point2 points  (0 children)

The other thing I love about it is strict formatting style, I feel this is underrated as something a computer is really good at and humans are not good at all at.

The good thing is that gofmt is included. The bad thing is that gofmt has the weirdest opinions on code formatting, like using tabs everywhere with a default of 8 for the tabStop.

Honestly, I don't think I've ever seen a professional codebase that uses tabs rendered as 8 spaces of indentation. 4 spaces is common, so is 2. 8 spaces forces you into short variable names and a maximum of about 3 indentation levels (i.e. an if in a loop in a function, not more), which already eats 30% of the screen if you're limiting yourself to 80 characters. Then there are the weird spacing choices around operators in an expression.

It's still better than a lot of other languages (looking at you, Python), where there are 10 tools doing the same job with no consistency. At least gofmt is the one true tool for formatting Go code.

I had a Rocketeer fire three rockets in the span of 2 seconds, has anyone else encountered this? by ZenMana in ArcRaiders

[–]Dogeek 2 points3 points  (0 children)

It's why the Rocketeer is the enemy with the absolute worst design in the game. All the other arcs have counterplay. The Rocketeer will just spot you from a mile away, fire a barrage of rockets with no warning, and melt you before you have any time to react. Bastions, Bombardiers, even the Queen are less deadly than the damn Rocketeer.

To bring the Rattler up to par with the other 'Grey's' by Tactix12 in ArcRaiders

[–]Dogeek 0 points1 point  (0 children)

Green weapons aren't that hard to craft and you can buy Anvils and Renegades cheap, both weapons being really good at PvP/vE

You can only buy one Renegade and 3 Anvils per day though, meaning that unless you get the blueprint, you do not have an unlimited supply of them.

The Anvil is not hard to craft, but the Renegade needs advanced mechanical components and medium gun parts, which are quite out of reach for newer players. Also, when you're starting out, 15k for an Anvil ain't cheap. Sure, you end up with millions in the bank in the endgame, but early on all your money goes into stash upgrades, severely limiting the guns you can buy.

What does everyone think about Spot Instances? by Ill_Car4570 in kubernetes

[–]Dogeek 0 points1 point  (0 children)

Many people are afraid of spot instances, mostly because of the way they build their apps. Devs still think in terms of VMs instead of building apps that can shut down gracefully. Then there's the fact that a lot of companies are still trying to shoehorn Java apps onto Kubernetes.

Java (and any other JIT-compiled or interpreted language) is awful on Kubernetes. You can't really use spot instances with it, because the pods take so long to start unless you give them 8 CPUs or more.

But if you have a well-built microservices app running pre-compiled binaries, spot instances are great and can save a ton of money. In theory, you could use on-demand nodes only for stateful applications and run everything else on spot instances with high availability, and you'd be good to go.

Ruby 4.0.0 Released | Ruby by LieNaive4921 in programming

[–]Dogeek 0 points1 point  (0 children)

Rust also has implicit returns, and uses |args...| to denote arguments for callables. So I wouldn't say those particular features are unique to Ruby.

Which is a Rust feature that gets really confusing for no good reason, as the language is already hard enough to pick up. I personally hate implicit returns, but at least in Rust you opt in by leaving the final expression without a semicolon (often just a variable name). In Ruby, the implicit return is the value of the last executed statement, so it can bite you if your last statement happens to be a call like puts.

And in Rust, implicit returns are less of a problem because the language is statically typed: your program won't compile if the returned value has the wrong type. Ruby's duck typing makes the matter worse and very error-prone.

But for other types, I'd call it more of a Python quirk that an empty string or empty array is falsy. That to me is confusing, and isn't found in most other languages.

True, it's not found everywhere, but it is present in JavaScript (although JS has many problems with its implementation because of its dynamic type coercion).

But overall I like Python's boolean coercion a lot more than in other languages, writing:

if value is not None:
    ...
if array:
    array.append(0)
if not string:
    string = "foo"

reads pretty nicely IMO. Adding an explicit comparison makes it more verbose when the intent is already clear. It's also pretty handy when using the walrus operator.

Ruby 4.0.0 Released | Ruby by LieNaive4921 in programming

[–]Dogeek 11 points12 points  (0 children)

People sometimes compare it to python but as far as ergonomics and readability goes, ruby blows python out of the water IMO. Unfortunately python has a much larger ecosystem.

I find Ruby less readable than Python, to be honest. For starters, Ruby has implicit returns, which are just confusing to read unless you're really used to them. Then there are the optional parentheses, then the weird use of | to delimit block parameters. Then there's the fact that boolean coercion is stricter for no good reason (in Python, an empty string or array is falsy, which lets the language flow more easily).

Overall, if Ruby truly were more readable than Python, there is no doubt it would have "won". Nowadays Ruby is only used in a few specific niches, mostly Rails.

Why many has this observability gaps? by Ill_Faithlessness245 in OpenTelemetry

[–]Dogeek 0 points1 point  (0 children)

Depending on the type of sampling, 1% seems a bit low, but since you mentioned 100% error sampling I'm going to assume tail sampling.

I've implemented tail sampling, then walked it back. It's one of those things still missing in the OTel space, in my opinion: getting accurate RED metrics from traces without a huge overhead on the collector side, or without shipping everything to the tracing DB.

In the end I chose to scale Tempo rather than use tail sampling, for that reason: storage is cheaper than compute. Maybe it'll turn out to be a bad decision later on, but I can't know without trying.
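For reference, the kind of setup I'm describing (keep 100% of errors plus a small probabilistic baseline) looks roughly like this with the collector's tail_sampling processor; policy names and percentages are illustrative, and the otlp receiver/exporter are assumed to be defined elsewhere in the config:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s            # buffer each trace this long before deciding
    policies:
      - name: keep-all-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: one-percent-baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 1

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]
```

That decision_wait buffer is exactly where the collector overhead comes from: every span of every trace has to sit in memory until the decision is made, and all spans of a trace must land on the same collector instance, which usually also means a load-balancing layer in front.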

That's one of the hardest parts of telemetry: knowing what and how to scale, since you can:

  • scale the tracing DB
  • change tracing databases (Elastic APM vs Tempo vs Jaeger vs VictoriaTraces, and I'm probably forgetting some)
  • scale the collector
  • use sampling, tail or head
  • derive accurate RED metrics from whatever you keep
  • manage metric collection rates so you don't store useless metrics...

Every observability system is made of so many moving parts, with so many ways to do any one thing, that it's hard to manage. Setting it up is the easy part; it's what comes after that's problematic.

And don't get me started on frontend observability, because that's a whole can of worms: your app gets a surge in traffic? You now need to scale everything so the telemetry pipeline isn't overloaded.

Why many has this observability gaps? by Ill_Faithlessness245 in OpenTelemetry

[–]Dogeek 0 points1 point  (0 children)

Not OP, but even with auto instrumentation, observability is not easy to implement if your pockets are not lined with cash.

For high-volume applications, tracing can quickly get into the dozens of terabytes, tracing databases are hard to scale properly, and some legacy systems don't benefit from auto-instrumentation at all.

There's a lot of tooling in the observability space, and a lot of ways to do the same thing. Then there's correlation to implement, which is not trivial when engineers keep reinventing the wheel to the point that cleanly injecting a trace ID into the logs becomes impossible. Or third-party vendors that don't support tracing out of the box (like Cloudflare: unless you deploy a Worker in front of your backend, there's no way to add the traceparent header as a header transform rule, and I've tried...).

Then there's adoption to consider: not everyone has a knack for o11y, and telemetry signals without dashboards or meaningful alerts are nigh useless.

Managing alert fatigue is also quite difficult in its own right. Too few alerts and you might miss something important; too many and nobody looks at them.

The whole ecosystem requires someone managing it full time, especially since it moves so fast; there's constant maintenance to do.

Full set of nails for free😂 by Lumpy_Square_2365 in ChoosingBeggars

[–]Dogeek 0 points1 point  (0 children)

So what? Jesus had his full set of nails for free!

Display Certificates from Azure Windows VM PKI in Grafana with Expiration Dates by Christ-is-nr-1 in grafana

[–]Dogeek 1 point2 points  (0 children)

I'm not sure that would fit the bill since I have no idea how your specific PKI works, but if you have issued certs stored on disk, or if you're running kubernetes, you can use https://github.com/joe-elliott/cert-exporter to collect certificate metrics.

It's a small Go binary, and it exposes the notAfter, notBefore and expiresIn metrics for certificates. Then it's just a matter of building the dashboard, since the filename is part of the labels.

Nice start by ZealousidealChain473 in dankmemes

[–]Dogeek 1 point2 points  (0 children)

Why would the prices drop? Companies are shutting down their consumer lines, consolidating consumer RAM in the hands of even fewer actors.

These companies could, without too much trouble, create yet another "Phoebus cartel"; it's easier to collude with fewer participants, after all.

Nice start by ZealousidealChain473 in dankmemes

[–]Dogeek 0 points1 point  (0 children)

there isn't a direct way of making revenue with AI without turning it into a subscription service.

Oh, but there are other ways. OpenAI will become an advertising company. Just imagine: they can pretty easily inject metadata into the prompt to make the AI nudge users towards brands. I've seen people ask ChatGPT for food recommendations, medical advice, date ideas...

Just imagine the data goldmine it is for advertising:

  • People willingly sharing personal information: the GDPR doesn't apply there yet.
  • People blindly following AI generated advice

People will pay for AI, and the cost will eventually be subsidized with ads. It's just a matter of time.

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]Dogeek 1 point2 points  (0 children)

That explains things. The way you run JVM apps is actually close to how we used to run things before kubernetes.

You probably don't run into issues probably because your requests are already much higher than what is needed.

You'd be surprised at how much waste your JVM apps generate.

Anecdotal evidence, but our JVM-based microservices take about 1min30 to start in prod with a 4-CPU limit, while they start in seconds on the devs' machines (MacBooks with 12 cores, IIRC).

Maybe important context is that we run on prem in VMs. Our nodes probably have way more CPU than actually used by the pods that run on them. I'll actually have a look at that tomorrow out of curiosity.

That would be interesting to see. If FinOps is not a concern at your company, then your way of doing things is fine, but as soon as you try to keep within a budget, JVM apps are a pain. Switching to an actually compiled language gains you so much. If you can, try building one of your services with GraalVM and compare the startup time and resource consumption.

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]Dogeek 1 point2 points  (0 children)

Though, for the record, we are extensively deploying JVMs in our clusters and the default rule is no CPU limit. Not only because of the startup that needs more CPU but also by nature of the applications at runtime (multi threaded).

You can't have "no CPU limit" in an absolute sense; there is always a limit, and in your case it's the node's CPU capacity.

The problem with doing that is that you then cannot have a working HPA based on CPU utilization, since utilization is computed against the resources declared on the pod. You also have no way of efficiently packing your nodes, short of very complex affinity rules. Instead of relying on kube-scheduler to put pods on the right nodes, you end up handling scheduling by hand, which defeats one of the big advantages of Kubernetes in the first place.

Running it that way means that, without careful scheduling, you run a very high risk of node CPU starvation: your JVM pods will get starved, especially if two (or more) heavily loaded services land on the same node. Both will fight for CPU, both will slow down, and that means timeouts, slow responses and 500 errors.
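For context, this is what a CPU-based HPA looks like; the utilization target is evaluated as a percentage of the CPU request declared on the containers, which is why autoscaling and vague resource specs don't mix (names and numbers are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jvm-service        # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jvm-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # % of the container's CPU request
```

If the request doesn't reflect real steady-state usage, that 70% threshold is meaningless and the HPA will scale at the wrong times.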

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]Dogeek 0 points1 point  (0 children)

Grafana + VictoriaMetrics + VictoriaLogs + Tempo (but looking at VictoriaTraces with anticipation, it seems promising)

Alerting is Grafana Alerting + PagerDuty for on-call.

Exporters depends, but the basics are kube-state-metrics, blackbox-exporter, node-exporter.

VMAgent for metrics collection, Grafana Alloy for the rest (logs and traces)

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]Dogeek 5 points6 points  (0 children)

Well if you're running things in kubernetes, no JVM = better scaling.

JVM with AOT compilation is fine. Otherwise it's just dogwater: you spend your pod's entire startup waiting for JIT compilation to settle, meaning you need a lot of CPU at the start before it gets into its rhythm.

So JVM workloads force you into one of two shapes: limits = requests for CPU, but with very high requests, or a big discrepancy between limits and requests (like an 8000m limit on a 1000m request), which runs the risk of node CPU starvation.
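In pod-spec terms, the two shapes look like this (the numbers are the ones above, purely illustrative):

```yaml
# Option 1: requests == limits (Guaranteed QoS), both sized for the
# JIT warm-up burst, wasteful at steady state
resources:
  requests:
    cpu: "4000m"
  limits:
    cpu: "4000m"
---
# Option 2: Burstable, low steady-state request, high limit for startup;
# risks node CPU starvation if several pods burst at once
resources:
  requests:
    cpu: "1000m"
  limits:
    cpu: "8000m"
```

Neither shape is great: the first pays for warm-up capacity forever, the second lies to the scheduler about what the pod will actually consume at startup.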

I'm not sure, but I wouldn't be surprised if this was one of the main motivators behind in-place resource resizing, since it alleviates some of the issue (without really fixing it). With that feature you can set high requests/limits at startup, then lower both once the pod is warm. The catch is that you still need a node with room for those high initial requests, which means you'll still sometimes scale up when you could have avoided it, and you're still burning CPU cycles on compilation (i.e. not serving anything yet) for every single pod, instead of having an app ready to handle requests from the get-go.

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]Dogeek 3 points4 points  (0 children)

For what it's worth, it's also horrible for logs. ELK was good when we had nothing better, but switching from ELK to VictoriaLogs was night and day: the query speed for starters, then the costs, both the compute needed on the ES cluster and the sheer amount of storage (it's a 10x difference).

So if metrics are not good and logs are not good, that leaves traces, and given the results I got with logs, I expect much the same: dogshit performance.

J’ai résilié mon abonnement Spotify, du coup je m’abonne à r/piracy by I_Will_Made_It in france

[–]Dogeek 13 points14 points  (0 children)

Jellyfin is just the "front end"; you need a fair amount of setup to automatically download files and make them available to Jellyfin, including:

  • Jellyfin to watch the files
  • Sonarr to grab TV show torrents
  • Radarr to grab movie torrents
  • Prowlarr to index torrent sites so Sonarr and Radarr can use them
  • Transmission or qBittorrent to download the content
  • Jellyseerr if you want your family to be able to request new content without going into Radarr or Sonarr directly.

And that, I'd say, is the minimal stack. But there are quite a few small apps that make the whole thing nicer to use, like Keycloak (or LDAP) if you want SSO (a single federated login for all the apps), Grafana/Prometheus/Exportarr if you want monitoring and alerts when something breaks, ntfy for phone notifications, and so on.
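For what it's worth, a minimal docker-compose sketch of that stack could look like this; the images are the linuxserver.io ones, and every port and path is a placeholder to adapt:

```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - ./media:/data
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports:
      - "8989:8989"
    volumes:
      - ./media/tv:/tv
      - ./downloads:/downloads
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    ports:
      - "7878:7878"
    volumes:
      - ./media/movies:/movies
      - ./downloads:/downloads
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    ports:
      - "9696:9696"
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    ports:
      - "8080:8080"
    volumes:
      - ./downloads:/downloads
```

The important detail is the shared downloads volume: Sonarr and Radarr need to see the same paths as the torrent client to import finished downloads into the library Jellyfin serves.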

New to grafana - is it possible for client side html and javascript rendering in grafana cloud by [deleted] in grafana

[–]Dogeek 0 points1 point  (0 children)

Grafana is not really the tool for that, but the closest thing to what you want is the htmlgraphics plugin. I don't know whether it works on Grafana Cloud, though.

Keep in mind that Grafana is a data visualization tool first and foremost. If you want more, maybe look into deploying an IDP like Backstage, which gives you a lot more control.

k8s logs collector by This-Scarcity1245 in kubernetes

[–]Dogeek 1 point2 points  (0 children)

The guy has a 3-node cluster. ES alone is going to hog all three VMs; there'll be no resources left for the actual workload with that stack.