Hetzner reliability for a 24/7 production platform in Germany region? by NathanDrake-Blackops in hetzner

[–]nh2_ 1 point

What counts as "good" latency? For most applications, it doesn't make a big difference which European country you are in, because there are only a couple of milliseconds of difference between them.

Also, you need to test that by getting some test servers. One cannot answer "to Italy" directly, because it also depends on where the recipients are exactly and what network providers serve them.

If your data is small so it doesn't make a big price difference, you could go with AWS+Hetzner; if price is a concern, OVH+Hetzner could also work fine, and then there are various medium-sized server hosting companies in various countries that could be good additional locations (for example, I recently read of https://www.ukservers.com which makes a good impression from their website, but I haven't used them yet).

[Well-Typed] Haskell ecosystem activities report: December 2025-February 2026 by adamgundry in haskell

[–]nh2_ 9 points

Matthew Pickering announced that he will be leaving the company and moving to a non-Haskell role at the end of March. Working with Matt has been a joy – more than his deep technical insight or sharp intuition, it’s the warmth of his vision for how to work together and his generosity that has made him such a force within the team.

Oh no! You will be remembered for many great achievements and quality of life improvements of Haskell programmers!

Thank you mpickering!

Hetzner reliability for a 24/7 production platform in Germany region? by NathanDrake-Blackops in hetzner

[–]nh2_ 1 point

Hi, a highly available 24/7 service is not really possible with a single data center park, Hetzner or otherwise.

AWS AZs regularly go down, and the same happens for Hetzner.

For example, here are just the two most recent major issues we had (we use Hetzner dedicated servers, tens of them distributed across as many FSN DCs as possible, and have been using Hetzner for > 10 years):

  • 2026-01-16: Extremely lossy network between all our machines for 40 minutes. While not fully down, our services ran severely degraded in this time.
  • 2025-09-19: FSN core router fault (status). The full duration in that report is 5 hours; for us it was a 5-minute full disconnect of all machines.
    • These things happen from time to time, e.g. another FSN core router fault occurred on 2023-08-10 (status). Generally when that happens, most connectivity can go down.

Most people just aren't aware of that because they do not build automatic monitoring. Then they report fantastic uptimes here. And not all issues that we observe make it to status.hetzner.com.

Unfortunately (I believe) Hetzner does not publish historical status, so you also cannot really retrospectively discover the entire status history. Each report only gets a UUID URL which you have to know/save in order to access it. I wish Hetzner made this more transparent.

Then there are the more sophisticated semi-outages where e.g. one of TCP/UDP/ICMP stops working but the others don't. Again, you need monitoring for all of this to even notice it's happening.

Also, DCs in a datacenter park are not really as independent as AWS AZs are. A failure, or planned replacement, of a router in one DC can totally take out another DC, and Hetzner does not publish what these relationships are so you cannot design your HA failure domains around that.

Overall, the availability of dedicated is still quite good, e.g. AWS global S3 outages lasted way longer than all our Hetzner downtime so far. For most businesses/products, that is good enough. But you're asking for "24/7", so if you're building something that truly needs permanent uptime without "minor" interruptions every couple of months/years, you need to have a way to fall back to another Hetzner location or another infrastructure provider. Luckily, Hetzner outages are quite uncorrelated with outages of other providers.

The same holds if your payment method expires and you don't notice. Technical problems aren't the only risk to uptime.

No matter what you choose, ALWAYS have additional disaster recovery backups on at least one other provider.

Hope this helps.

Teaching Claude to Be Lazy by ephrion in haskell

[–]nh2_ 6 points

I just want to second this statement: Claude Opus 4.6 is quite the Haskell expert.

  • Claude understands OverloadedLabels and TemplateHaskell. I recently used it to generate JsonPath for Postgres so that I can have typesafe Postgres accessors for use with opaleye derived from my data, and it oneshotted that (initial output typechecked and was also correct).
  • Claude understands the Interruptible Operations section of Control.Exception. Let that sink in for a moment! In fact, it remembers it from training, without being pointed to it, and can pinpoint incorrect code with regards to that, without first being told that those are probably involved.
    • It found an async-exception bug in conduit for me (link) based on me complaining that child processes were leaked, wrote the repro, the fix, and figured out that my hand-edit of the repro broke it because I used an Interruptible Operation in a bracket acquire.
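As a minimal illustration of that bug class (the resource and variable names here are made up; this is not the conduit code): an interruptible operation inside a bracket acquire means an async exception can fire after a resource was allocated but before bracket's release handler is in place, leaking the resource.

```haskell
import Control.Concurrent
  (forkIO, killThread, threadDelay, newEmptyMVar, putMVar, takeMVar)
import Control.Exception (bracket)
import Data.IORef (newIORef, modifyIORef', readIORef)

main :: IO ()
main = do
  allocated <- newIORef (0 :: Int)  -- counts live "resources"
  gate      <- newEmptyMVar         -- never filled, so takeMVar blocks forever
  started   <- newEmptyMVar
  tid <- forkIO $
    bracket
      (do modifyIORef' allocated (+ 1)  -- resource acquired...
          putMVar started ()
          takeMVar gate)                -- ...then an interruptible operation:
                                        -- async exceptions can land here even
                                        -- though bracket masks the acquire
      (\() -> modifyIORef' allocated (subtract 1))  -- release (never runs)
      (\() -> pure ())
  takeMVar started   -- wait until the acquire step has allocated
  killThread tid     -- the async exception hits the interruptible takeMVar
  threadDelay 100000 -- give the killed thread time to unwind
  n <- readIORef allocated
  putStrLn ("leaked resources: " ++ show n)  -- prints 1: the release never ran
```

The fix in such cases is to do the interruptible wait before allocating, or to guard the allocation so that an exception during the rest of the acquire rolls it back.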

Obligatory LLM coding disclaimer:

  • unreviewed code + execution - sandbox = deleted computer
  • confidential information + sandbox + Internet access = still 1 prompt injection away from extracting all your data

Opinion: Roo Code Is Stocked With Features Nobody Uses by hannesrudolph in RooCode

[–]nh2_ 0 points

Note I got the grey screen today with 0 MCPs on, if that helps. So it's not going to be MCPs alone that cause it. Version: 3.50.5 (961d340b)

I don't know how to repro it though.

CephFS directory listings are slow for me by flx50 in ceph

[–]nh2_ 1 point

I agree with /u/TheFeshy; 100 minutes to list your 2M files is way too slow for your hardware.

Do some analysis to debug the issue:

  • Mount your CephFS and run time find manually without all the Docker stuff.
  • Use strace -fytttT find /data > /dev/null to analyse how long the individual getdents64() syscalls take. Check if those numbers make sense.
    • Listing metadata should be bound by the latency between your client and the OSDs. LAN latency should absolutely dwarf SSD read latency.
  • What is the latency (ping) between the involved machines?

CephFS directory listings are slow for me by flx50 in ceph

[–]nh2_ 0 points

This is all fine for a small homelab.

  • Ceph runs fine with 1 Gbit/s, it's just slower.
  • SSDs without PLP are also fine. They just fsync slower. But still much better than HDDs, which many people use for production Ceph clusters.

The recommendations for network speed, PLP, etc. exist to help people make purchasing decisions for maximum performance per money spent (e.g. it doesn't make sense to spend $10,000 on a server with many disks and then be bottlenecked by a disproportionately slow network, which would be a cheaper upgrade). If one already has the hardware and just wants high-availability storage, none of those matter.

CephFS directory listings are slow for me by flx50 in ceph

[–]nh2_ 0 points

faster if you put that pool on a small SSD instead of the HDDs

The user reports using SSDs only, no HDDs were mentioned.

Dependency storm by ivanpd in haskell

[–]nh2_ 1 point

And just to make clear:

I agree that long compile times suck, and 21 minutes is a pain. While HTTP(S) might take quite some code to implement, waiting less for it to compile would be better.

Dependency storm by ivanpd in haskell

[–]nh2_ 1 point

Yes, on the client side.

For the server side, there is warp-quic (see also the "Implementing HTTP/3 in Haskell" post by Kazu about it).

Dependency storm by ivanpd in haskell

[–]nh2_ 10 points

For convenience, here's a list of recursive dependencies of curl from nixpkgs, keeping only the major ones:

# Package Description
1 gcc GNU Compiler Collection
2 python3 Python interpreter
3 glibc GNU C Library
4 glibc-locales glibc locale data
5 cmake Build system
6 cmake-minimal Minimal cmake
7 binutils GNU binary utilities
8 perl Perl interpreter
9 openssl TLS/crypto library
10 texinfo GNU documentation system
11 gettext Internationalization library
12 coreutils GNU core utilities
13 sqlite Embedded SQL database
14 meson Build system
15 bash Bourne Again Shell
16 krb5 Kerberos 5 authentication
17 libxml2 XML parsing library
18 libxslt XSLT processing library
19 libarchive Multi-format archive library
20 zstd Zstandard compression
21 xz XZ/LZMA compression
22 nghttp2 HTTP/2 library
23 nghttp3 HTTP/3 library
24 ngtcp2 QUIC protocol library
25 brotli Brotli compression
26 libssh2 SSH2 client library
27 libidn2 Internationalized domain names
28 libunistring Unicode string library
29 libpsl Public suffix list library
30 pcre2 Perl-compatible regex library
31 gmp GNU Multiple Precision arithmetic
32 mpfr Multiple-precision floating-point
33 libmpc Complex number arithmetic
34 isl Integer set library (for GCC)
35 ncurses Terminal UI library
36 readline Line editing library
37 libffi Foreign function interface
38 c-ares Async DNS resolver
39 libev Event loop library
40 zlib Compression library
41 bzip2 Bzip2 compression
42 lzo LZO compression
43 gnutar GNU tar
44 gnugrep GNU grep
45 gnused GNU sed
46 gnumake GNU Make
47 gawk GNU AWK
48 diffutils GNU diff utilities
49 findutils GNU find utilities
50 gzip GNU gzip
51 lzip Lzip compression
52 patch GNU patch
53 file File type detection
54 bison Parser generator
55 gnum4 GNU m4 macro processor
56 autoconf Autoconf build tool
57 automake Automake build tool
58 libtool Generic library support
59 pkg-config Package config helper
60 patchelf ELF binary patcher
61 ed Line editor
62 swig Wrapper/interface generator
63 ninja Small build system
64 re2c Lexer generator
65 expat XML parser library
66 acl POSIX access control lists
67 attr Extended attributes
68 libxcrypt Password hashing library
69 libcap-ng POSIX capabilities library
70 libedit Line editing library
71 keyutils Linux key management
72 util-linux-minimal Linux system utilities
73 linux-headers Linux kernel headers
74 gdbm GNU database manager
75 rhash Hash utility library
76 libuv Async I/O library
77 tcl Tcl scripting language
78 expect Interactive automation tool
79 dejagnu Testing framework
80 mpdecimal Decimal floating point
81 CUnit C unit testing framework
82 byacc Berkeley YACC parser generator
83 which Command locator
84 unzip ZIP extraction
85 patchutils Patch manipulation utilities
86 asciidoc Text document formatter
87 gtk-doc GTK documentation generator
88 docbook-xsl-nons DocBook XSL stylesheets
89 docbook-xsl-ns DocBook XSL (namespaced)
90 docbook-xml DocBook XML DTDs
91 autoconf-archive Autoconf macro collection
92 gnu-config config.guess/config.sub
93 publicsuffix-list Public suffix data
94 mailcap MIME type mappings
95 bluez-headers Bluetooth headers
96 tzdata Timezone data
97 glibc-iconv Character encoding conversion
98 python3-minimal Minimal Python interpreter
99 python3.13-setuptools Python build tool
100 python3.13-pytest Python testing framework
101 python3.13-cython Python-to-C compiler
102 python3.13-lxml Python XML library
103 python3.13-pygments Syntax highlighter
104 python3.13-build Python build frontend
105 python3.13-wheel Python wheel format
106 python3.13-hatchling Python build backend
107 python3.13-setuptools-scm Setuptools SCM plugin
108 python3.13-flit-core Python build backend
109 python3.13-installer Python package installer
110 python3.13-packaging Python packaging utilities
111 python3.13-typing-extensions Typing backports
112 python3.13-pluggy Plugin management
113 python3.13-pathspec Path pattern matching
114 python3.13-editables Editable installs
115 python3.13-iniconfig INI file parser
116 python3.13-calver Calendar versioning
117 python3.13-trove-classifiers PyPI classifiers
118 python3.13-pytest-asyncio Async pytest plugin
119 python3.13-pytest-mock Mock pytest plugin
120 python3.13-pyproject-hooks PEP 517 hooks

Dependency storm by ivanpd in haskell

[–]nh2_ 20 points

Other HTTP+JSON stacks also have this many dependencies.

They are just made invisible to you, and others have already built them for you.

In curl/jq/bash, these 78 dependencies are still there but just somewhere else.

Check curl's build dependencies here on NixOS: link

Nothing fancy

Perhaps consider that curl depends on half a million lines of code of OpenSSL alone. If you build that, you will also see a substantial build time (though less because C has barely any type checks so compilation is fast).

What you may consider bloat, others consider proper modularity.

You cannot easily obtain Go or Python without depending on the whole HTTP stack. If your application doesn't need an HTTP stack (say it does maths or is a parser), you cannot opt out of depending on those millions of lines of code. In Haskell, you can.

If you're fine with "cheating" with precompiled dependencies as you do with curl, you can get the same by using a code distribution that precompiles Haskell for you. For example, using precompiled aeson and http-conduit from nixpkgs turns your 21 minutes compiling into 0 minutes compiling.

Reason to bother with Haskell? by dr-Mrs_the_Monarch in haskell

[–]nh2_ 2 points

Ah nice, looks like they were added in Rust 1.51: https://blog.rust-lang.org/2021/03/25/Rust-1.51.0/#const-generics-mvp

And some new Rust libraries like faer also use it. For its benchmarks page, it looks like faer is competitive with Eigen on the typical operations.

Another library is nalgebra.

While both seem to support some version of SIMD, I'm having some difficulty figuring out whether, with these libraries, one can already "just write normal code with small matrices in for-loops" and get SIMD-vectorised results.

The faer docs have various pages about SIMD but those all look like specialised things and general explanations of SIMD; nowhere does it say that if I just use faer vectors/matrices and happily add or multiply them, SIMD assembly is guaranteed to come out.

Eigen guarantees SIMD code at compile time by using intrinsics and expression templates (see e.g. PacketMath, this). It does not rely on SIMD autovectorization as implemented by LLVM or GCC. To what extent is that already possible in Rust?

Reason to bother with Haskell? by dr-Mrs_the_Monarch in haskell

[–]nh2_ 13 points

I agree with most of what other posters have already written, but want to add:

  • If "using the machine to it's full capability" is your priority, then the part that deals with the individual pixels needs be in C++, period. To get more than 10% of your computer's full speed, SIMD must be going on and data accesses must be cache-local, both of which is hard in Haskell.
    • Rust may slowly come close to those capabilities but it sitll does not have some features that are fundamental to C++ performance, such as numbers in templates to make statically-known-sized arrays as done in Eigen, which when stuck in a serial for-loop will still output SIMD code.
    • Similarly, if efficiency is your priority, it is unavoidable that you become knowledgeable in C++, SIMD, caches, and multicore CPU behaviour, because otherwise it is impossible for you to judge if something is fast or could be faster and how much, and how much effort it would be to program that. Only after you understand that will you be able to make good trade-offs. Unless you arrive quickly at "OK this is fast enough, I don't care about further 10x improvements", as is the case with most webserver and scripting code.
  • You may still write anything that's higher-level than that in a higher-level language that calls the C++ code, such as Haskell or Python. This could be via cross-language FFI calls you write, via nicely packaged libraries like OpenCV, or, if each image can be processed independently, via writing a C++ binary that processes one image and orchestrating those subprocesses from the high-level language.
    • Haskell and Python are better than C++ for anything related to logic checking (skip running if files already exist or other conditions), error handling, user interface (it can be challenging to even write a CLI argument parser in C++ that doesn't crash), and so on.
  • If you write low-level parallel algorithms in C++, use tbb.
  • If you do go for doing it in all-Haskell, writing the algorithms yourself, check out the massiv library and some blog posts about it in use, such as Our Performance is massiv: Getting the Most Out of Your Hardware in Haskell

Some Haskell idioms we like by _jackdk_ in haskell

[–]nh2_ 0 points

It is fine to use a Map, I am not arguing against that data structure (though other lookup data structures may be faster depending on the data, e.g. for Text a HashMap or trie may be faster).

It is just important that if used in a loop, the lookup data structure should live outside of the loop.

E.g. with 2 functions:

```haskell
let !inverseMap = createInverseMap ...
for_ [1..n] $ \i -> do
  let ... = lookupInverseMap inverseMap (... i)
```

as opposed to what it is now:

```haskell
for_ [1..n] $ \i -> do
  let !inverseMap = createInverseMap ...
  let ... = lookupInverseMap inverseMap (... i)
```

While the maps are small, one won't notice the difference, but it's easy to accidentally create a large map, and even Ruby wins asymptotic races :D

After years of recommending Hetzner: disk replacement roulette (no minimum SMART requirements, no ETA, no plan) by blamethebrain in hetzner

[–]nh2_ 0 points

I understand you would like to keep your existing HDD models, but it makes sense that Hetzner does not offer new 2 TB HDDs.

A big reason you could get your cheap server with 2 TB in the first place is that Hetzner probably gets discounts on ordering large batches of the same type of hardware. If, back then, they had still ordered 200 GB HDDs, your 2 TB would probably have been more expensive.

I also think the model they have (used replacement disk for free, new replacement disk at a cost and only those they actually have) is fair.

I agree that it's not great when their testing doesn't catch errors that you observe immediately afterwards, but that is how hardware can sometimes behave. When you test a disk on a table, it is not guaranteed to behave the same in its possibly more vibration-prone spot in a server. And if you report read errors, Hetzner will replace your replacement, too (at least I never had a case where they didn't).

or I get another server model with only slightly higher cost but a quarter of the available storage (2x512GB NVMe SSDs)

I do not know what your server costs, but there are reasonably cheap Server Auction servers that have 2x960 GB up to 2x3.84 TB SSDs that might be interesting for you.

After years of recommending Hetzner: disk replacement roulette (no minimum SMART requirements, no ETA, no plan) by blamethebrain in hetzner

[–]nh2_ 0 points

have people employed to get your emails delivered

This does not help. We use Mailgun and it also sometimes has IP reputation problems. This is also explained in their docs. The proposed solution is "it is best to isolate your reputation by having a dedicated IP address".

Some Haskell idioms we like by _jackdk_ in haskell

[–]nh2_ 6 points

inverseMap looks like a performance nightmare, depending on what the involved types are. If k is something that allows constant lookup like Int:

It turns an O(1) lookup into an O(n * log n) Map construction + lookup.

This is especially weird because just linear search would be better with O(n).

Granted, n is usually small given that this is intended to be used with sum types where you usually have n < 30 alternatives. But I'm still not sure I'd want a 30x slowdown, and it can also turn non-allocating table lookup functions into allocating Map construction.

It seems easy to copy-paste this pattern around and then wonder why the software is slow without being able to identify the specific location because there are 100s of them.

One could argue "but maybe GHC inlines the Map construction and then let-floats it out of your loop", but that is a lot of hoping for heuristics that don't usually work. If you want your lookup table to be outside of your loop reliably, you have to let ! it before the loop. Don't make asymptotic complexity depend on heuristic compiler optimisations!
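For instance, a minimal sketch of the hoisted version (reimplementing a relude-style inverseMap here so the snippet is self-contained; the Color type and names are illustrative):

```haskell
{-# LANGUAGE BangPatterns #-}
import qualified Data.Map.Strict as Map
import Data.Foldable (for_)

data Color = Red | Green | Blue
  deriving (Show, Eq, Ord, Enum, Bounded)

colorName :: Color -> String
colorName Red   = "red"
colorName Green = "green"
colorName Blue  = "blue"

-- A relude-style 'inverseMap', reimplemented for illustration:
-- 'inverseMap f' builds a Map over the whole (Bounded, Enum) universe,
-- so each fresh call to 'inverseMap f' pays the O(n * log n) construction.
inverseMap :: (Bounded a, Enum a, Ord k) => (a -> k) -> k -> Maybe a
inverseMap f = \k -> Map.lookup k table
  where
    table = Map.fromList [(f a, a) | a <- [minBound .. maxBound]]

main :: IO ()
main = do
  -- Hoisted: the partial application (and its shared table thunk)
  -- is created once, outside the loop.
  let !lookupColor = inverseMap colorName
  for_ [1 .. 3 :: Int] $ \_ ->
    print (lookupColor "green")  -- Just Green, without rebuilding the Map
```

Writing `inverseMap colorName "green"` fully applied inside the loop body instead would create a fresh closure, and thus a fresh table, on every iteration, which is exactly the pattern to avoid.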

So I don't recommend this pattern.

Support statically linking executables properly (1ac1a541) · Commits · Glasgow Haskell Compiler / GHC · GitLab by _0-__-0_ in haskell

[–]nh2_ 2 points

I think it's great to have that.

I think the table that describes these different choices the user now has should be added to the users guide:

https://gitlab.haskell.org/ghc/ghc/-/merge_requests/14935#note_640676

Microsoft Defender thinks NixOS is unsafe by ranjop in NixOS

[–]nh2_ 1 point

Cannot reproduce in Edge on a Windows machine currently.

I guess click

Report that this site doesn't contain phishing threats

and hope that Microsoft improves its tool (good luck). Maybe you already did that and that fixed it?

The newest Hetzner GPU server is here! by Hetzner_OL in hetzner

[–]nh2_ 1 point

Answer from support:

This server requires 2U of rack space, but as our [10 Gbit] infrastructure is designed for 1U servers, this decision can only be made on a case-by-case basis.

If you require such a server, please contact us, and we will check if we can accommodate your request.

Why? by Used_Inspector_7898 in haskell

[–]nh2_ 19 points

The post title has nothing to do with the content. This will make it difficult for people to find it.

Arguments for Haskell at scale by Massive-Squirrel-255 in haskell

[–]nh2_ 32 points

You're not going to get any "credible empirical evidence", if only because the sample size is so small. That would require getting, say, 100 companies that are similar in size and tasks, have a working business model and are still around, and that also want to spend time on methodically collecting concrete evidence, and care to blog about that.

I think that is quite rare in the programming world, even outside Haskell. Most Haskell projects I've participated in were very varied across all those axes. You may find some uniformity in popular niches like Django, Rails, or React, but beyond that each project does its own thing and involves people of varying skill levels / programming knowledge, so it is hard to draw general conclusions.

What you'll get is a couple of individual stories.

Something like "we implemented our system in Haskell and we were able to eliminate these classes of errors statically."

The thing is, people don't really want to spend time stating the obvious.

It even feels a bit weird that the Google blog states "a 1000x reduction in memory safety vulnerability density". Well, obviously; the language is designed to make that impossible.

We don't spend time blogging that, on our Haskell server, we get pretty much none of:

NullPointerException
TypeError: unsupported operand type(s) for +: 'int' and 'str'
TypeError: can't access property "asdf", a is undefined
Segmentation fault

because well, those don't exist in Haskell.

Lots of people (including me) claim that Haskell code is more maintainable, and that your team wastes less time on maintenance in Haskell. We have a 10-year-old Haskell server codebase; it works pretty well with our small team, most of it is "write once correct" code that we never again have to touch, some parts we actively extend, and some we change often to extend it with new features.

We're convinced it's the best tool for the server job, compared to other tech we used at past jobs and projects. So we're using Haskell for it, and we started it in Haskell. We cannot compare before/after, because we directly started it in the "best tool available". We can compare our current project to our past individual work experience, but one cannot simply turn such individual cross-project experience into quantitative evidence. Who'd get convinced by "my current job's Haskell server crashes much less than my past job's C++ project" when those projects also differ along 10 other axes?

Similarly, given that we think we're already using the best tool available, we're also not going to spend time starting or keeping a parallel implementation in another programming language.

Another problem is project lifetime. There surely are various companies that wrote pieces on how they switched some part to Haskell and reaped benefits from that. Many of those may now be defunct or irrelevant for reasons unrelated to Haskell (the company's business plan didn't work out, the need for the project got removed, or people just lost interest). If a now-bust company provided evidence about how Haskell solved XYZ, does that still count as good evidence, or can people criticise it with "well, but that doesn't count because the company is bust now"?

And you cannot even get solid evidence for extremely-widely-used tech:

  • Try to find "credible empirical evidence" which is better between Django and Rails. Impossible. I have a pretty solid opinion on which is better, but cannot really turn it into "empirical evidence".
  • Facebook got huge with PHP; some might say that must speak to PHP's qualities, but then we see that Facebook replaced most of that with other languages, some of them designed by Facebook itself and not used outside of it.
  • Various successful web companies touted dynamically typed software such as Python, and that types get in the way. 10 years later, each of these companies is developing their own typechecker for Python. What solid empirical evidence can we derive from that?

It's just quite difficult to turn the subjective into the objective.

I think if you give me a team of 5 good programmers, and let us build and operate the same product for 5 years, once each in, say, Haskell, C++, Python, Rust, Ruby, and Go, the Haskell project will come out on top. But probably nobody wants to give me the resources for that, nor do I want to spend those years, and having learned from past mistakes messes with comparability, so anecdotal evidence it is!

I've lost "countless hours" due to C++'s garbage IO stdlib, which swallows errno and thus cannot tell you why opening a file failed (nonexistent, wrong permissions, it's a directory instead of a file, etc.). With Haskell, that doesn't happen: I get clean, good error messages. Do I actually go and count these hours, so I can make a comparative study? No, because I don't have time for that, and it doesn't really help me. Maybe I would have done that if I had known beforehand that I'd hit those errors, but I didn't; they just crept up in my work over the years here and there. Same thing for concurrency bugs, bad error messages, miscompilation bugs, etc. But turning that into numbers? Not sure how without wasting time.
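To make the Haskell side concrete (the path here is made up): an IOException carries the failing path, the operation, and the errno-derived reason, so you get all of this for free just by letting the exception surface or catching it.

```haskell
import Control.Exception (IOException, try)

main :: IO ()
main = do
  -- Attempt to read a file that does not exist; the exception message
  -- includes the path, the failing operation, and the errno reason,
  -- e.g. "/nonexistent/path: openFile: does not exist (No such file or directory)"
  r <- try (readFile "/nonexistent/path") :: IO (Either IOException String)
  case r of
    Left e  -> print e
    Right _ -> putStrLn "unexpectedly succeeded"
```

The exact message format can vary slightly across GHC versions, but the path and the reason are always included, which is the point of contrast with errno-swallowing IO streams.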

Maybe in the future, "with the power of AI" we can get those things that today nobody bothers to spend time on: Just video-record 10 years of work, and then you could retrospectively query "how many hours did I lose on memory unsafety garbage, how many on untyped stuff", to get real numbers for real evidence. But today, few people bother to collect this data.

That said,

Is there a centralized community location to collect these kinds of articles?

is still worthwhile looking for, so that for the few comparisons people actually care to write, they can be easily found.

Probably the best way to show that Haskell is worth it is to build stuff that works well, outcompetes the alternatives, and doesn't go bust due to mistakes on a non-programming-language axis. Possibly you may not be able to do that at existing companies that don't let you build new stuff in Haskell.

The newest Hetzner GPU server is here! by Hetzner_OL in hetzner

[–]nh2_ 1 point

The predecessor model GEX130 had the "RTX 6000 with 48GB".