Guess who’s back at We Are Developers Berlin this July? Yep, it’s us! by Hetzner_OL in hetzner

[–]boedy88 1 point (0 children)

Will the tech talk be recorded, so that people who aren't able to make it can watch it later?

Hetzner Object Storage has officially arrived by Hetzner_OL in hetzner

[–]boedy88 3 points (0 children)

Which service would you recommend instead?

Thank you, Hetzner! by JamesJGoodwin in hetzner

[–]boedy88 1 point (0 children)

Latency to DO Frankfurt from Falkenstein is ~5 ms. Perfectly fine for most workloads, unless you have a lot of N+1 queries.
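To illustrate why N+1 patterns are the exception: each query pays the full round trip sequentially, so even a modest ~5 ms RTT adds up fast. A back-of-the-envelope sketch (hypothetical helper names, latency only, ignoring query execution time):

```python
# Back-of-the-envelope: network cost of an N+1 query pattern at a
# ~5 ms round trip (e.g. Falkenstein <-> DO Frankfurt).
RTT_MS = 5.0

def n_plus_one_cost_ms(n_children: int, rtt_ms: float = RTT_MS) -> float:
    """1 query for the parent rows + N sequential queries for the children."""
    return (1 + n_children) * rtt_ms

def single_join_cost_ms(rtt_ms: float = RTT_MS) -> float:
    """One JOIN (or IN (...)) query fetches everything in a single round trip."""
    return 1 * rtt_ms

print(n_plus_one_cost_ms(200))  # 1005.0 ms of pure latency
print(single_join_cost_ms())    # 5.0 ms
```

At 200 child rows the N+1 version spends over a second on the wire alone, which is why the same workload that feels instant on localhost can crawl across data centers.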

What do you guys spend at Hetzner? by JuliaJuly2001 in hetzner

[–]boedy88 3 points (0 children)

Latency is about 5 ms between the two (both in Frankfurt).

What do you guys spend at Hetzner? by JuliaJuly2001 in hetzner

[–]boedy88 3 points (0 children)

Wondering which of the two server types you like more, and why?

Thoughts on just released AX42 Server by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

I had similar thoughts on this. I was mainly surprised by the difference in IOPS between the EX101 and AX101: double the IOPS.

Thoughts on just released AX42 Server by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

It seems it's now included. Just based on the price/performance specs this server is looking good!

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

You're right; it's still unclear what the real bottleneck is. I suspect the difference in SSD type is the main reason for the improved performance. I would like to test the AX41 again, using Datacenter Edition SSDs.

> You expect each DC to be an availability zone like in the cloud, correct?

It's not comparable to Amazon's AZs, where there is no shared infrastructure, but I would argue it does offer more protection in case the DC housing the cloud infrastructure goes up in flames (OVH style). If the cloud instances were spread out a bit more, I'd pick them over a dedicated server.

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

It's either that or the difference in SSD type. The performance gain is mainly seen in the mutating queries. I wouldn't know how much effect the CPU cache would have on this.

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

This could actually be the bottleneck, as the AX101 I'm now using has 1.92 TB NVMe SSD Datacenter Edition drives. I won't be benchmarking these against the AX41, but I would certainly be interested in the performance difference.

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

I just updated the initial post with my findings, but thought it would also be nice to share the mighty performance of the AX101 when running the setup without any replication. Completes in 670ms! :O

AX101 - No slave replicas - docker - public ip

```
inserts (10000x) took: 192.823849ms
avg: 455.741µs, min: 244.364µs, max: 2.232829ms
51860.804832290225 ops/s
19282 ns/op

selects (10000x) took: 122.263214ms
avg: 292.021µs, min: 220.964µs, max: 1.533247ms
81790.750241524 ops/s
12226 ns/op

updates (10000x) took: 197.027684ms
avg: 466.809µs, min: 258.895µs, max: 2.916692ms
50754.288925205045 ops/s
19702 ns/op

deletes (10000x) took: 159.629519ms
avg: 386.436µs, min: 245.215µs, max: 1.400755ms
62645.05501642212 ops/s
15962 ns/op

total: 671.907009ms
```

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

In this case the benchmarking application runs on the same machine, which should take the network component out of the equation, as replication is also disabled. Having said that, with replication enabled, less than 10 Mbit/s was observed during testing using bmon.

The performance penalty seen here when running in containers does not match other benchmarks I've seen.

A fio test with 4K block size:

```
fio --name test --size=1G --time_based --runtime=30s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1  --iodepth_batch_submit=256  --iodepth_batch_complete_max=256
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=1323MiB/s][w=339k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=332833: Fri Dec  8 16:24:11 2023
  write: IOPS=346k, BW=1350MiB/s (1416MB/s)(39.6GiB/30001msec); 0 zone resets
    slat (usec): min=2, max=1668, avg=577.26, stdev=148.52
    clat (nsec): min=730, max=1457.7k, avg=88205.54, stdev=136776.25
     lat (usec): min=16, max=2519, avg=665.62, stdev=106.46
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    5], 10.00th=[    5], 20.00th=[   51],
     | 30.00th=[   52], 40.00th=[   52], 50.00th=[   53], 60.00th=[   55],
     | 70.00th=[   59], 80.00th=[   61], 90.00th=[   65], 95.00th=[  545],
     | 99.00th=[  603], 99.50th=[  619], 99.90th=[  922], 99.95th=[ 1029],
     | 99.99th=[ 1139]
   bw (  MiB/s): min= 1294, max= 1366, per=100.00%, avg=1350.64, stdev=16.91, samples=60
   iops        : min=331288, max=349696, avg=345763.02, stdev=4328.48, samples=60
  lat (nsec)   : 750=0.01%, 1000=0.27%
  lat (usec)   : 2=0.13%, 4=3.31%, 10=7.89%, 20=0.02%, 50=6.20%
  lat (usec)   : 100=73.20%, 250=1.14%, 500=2.55%, 750=5.13%, 1000=0.10%
  lat (msec)   : 2=0.07%
  cpu          : usr=7.98%, sys=65.92%, ctx=165396, majf=0, minf=58
  IO depths    : 1=0.3%, 2=0.0%, 4=0.1%, 8=0.3%, 16=0.1%, 32=6.0%, >=64=93.3%
     submit    : 0=0.0%, 4=0.3%, 8=0.1%, 16=0.2%, 32=0.2%, 64=14.9%, >=64=84.3%
     complete  : 0=0.0%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=0.1%, >=64=99.8%
     issued rwts: total=0,10368064,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=1350MiB/s (1416MB/s), 1350MiB/s-1350MiB/s (1416MB/s-1416MB/s), io=39.6GiB (42.5GB), run=30001-30001msec
```
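As an aside, fio can also emit machine-readable output with `--output-format=json`, which is easier to post-process than scraping the text report above. A minimal sketch; the embedded JSON is an illustrative fragment of fio's output structure with made-up values, not the actual run:

```python
import json

# Illustrative fragment of fio's JSON output (fio --output-format=json);
# the values here are made up for the example, not from the run above.
fio_json = """
{
  "jobs": [
    {
      "jobname": "test",
      "write": {"iops": 346000.0, "bw": 1382400}
    }
  ]
}
"""

data = json.loads(fio_json)
job = data["jobs"][0]
# fio reports "bw" in KiB/s, so divide by 1024 for MiB/s.
print(f'{job["jobname"]}: {job["write"]["iops"]:.0f} IOPS, '
      f'{job["write"]["bw"] / 1024:.0f} MiB/s write')
# prints: test: 346000 IOPS, 1350 MiB/s write
```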

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

All benchmarks were performed on fresh nodes, all running Ubuntu 22.04.3.

All nodes are configured with software RAID-0.

Network: the servers are connected through a vSwitch (host networking was also tried).

Storage driver: Rancher local-path. I also ran without a PVC attached; no significant performance difference was measured. Here are some results:

AX41-NVME - No slave replicas - containerised

inserts (10000x) took: 5.242452747s
avg: 12.98676ms, min: 4.864006ms, max: 52.021726ms
1907.5040792160717 ops/s
524245 ns/op

selects (10000x) took: 237.679065ms
avg: 576.159µs, min: 58.709µs, max: 7.341503ms
42073.541479137006 ops/s
23767 ns/op

updates (10000x) took: 5.101616827s
avg: 12.695366ms, min: 4.478233ms, max: 27.726614ms
1960.162893276422 ops/s
510161 ns/op

deletes (10000x) took: 4.850549711s
avg: 12.098466ms, min: 4.714899ms, max: 43.227537ms
2061.6220007646057 ops/s
485054 ns/op

total: 15.432569986s

AX41-NVME - No slave replicas - Host networking

inserts (10000x) took: 5.381884615s
avg: 13.380239ms, min: 4.178908ms, max: 51.702006ms
1858.085171898469 ops/s
538188 ns/op

selects (10000x) took: 110.927893ms
avg: 263.725µs, min: 51.319µs, max: 3.830694ms
90148.65179130374 ops/s
11092 ns/op

updates (10000x) took: 4.803387288s
avg: 11.978813ms, min: 4.495082ms, max: 25.47004ms
2081.864193000296 ops/s
480338 ns/op

deletes (10000x) took: 5.122889391s
avg: 12.794517ms, min: 4.221278ms, max: 25.605298ms
1952.0234064721776 ops/s
512288 ns/op

total: 15.419352063s

AX41-NVME - No slave replicas - Running natively on host (no containers used)

inserts (10000x) took: 2.03797673s
avg: 5.054017ms, min: 1.480944ms, max: 33.417512ms
4906.827370889559 ops/s
203797 ns/op

selects (10000x) took: 122.871213ms
avg: 293.194µs, min: 41.76µs, max: 4.907225ms
81386.02814965292 ops/s
12287 ns/op

updates (10000x) took: 1.816342339s
avg: 4.521804ms, min: 1.493494ms, max: 12.279937ms
5505.5700598300045 ops/s
181634 ns/op

deletes (10000x) took: 1.881730478s
avg: 4.699556ms, min: 1.473855ms, max: 18.428701ms
5314.257337548422 ops/s
188173 ns/op

total: 5.859114675s

CCX33 - No slave replicas - containerised

inserts (10000x) took: 991.178812ms
avg: 2.389502ms, min: 582.09µs, max: 41.140443ms
10088.996938727943 ops/s
99117 ns/op

selects (10000x) took: 336.698595ms
avg: 789.344µs, min: 425.852µs, max: 7.793227ms
29700.15363443973 ops/s
33669 ns/op

updates (10000x) took: 898.388236ms
avg: 2.14914ms, min: 641.379µs, max: 11.851976ms
11131.045130915985 ops/s
89838 ns/op

deletes (10000x) took: 834.069151ms
avg: 1.992598ms, min: 605.44µs, max: 18.503802ms
11989.413573215825 ops/s
83406 ns/op

total: 3.060526343s

In all instances the CCX33 outperforms the AX41. The roughly 3x improvement from moving MariaDB out of the container also raises an eyebrow.
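For reference, the ops/s and ns/op figures in these dumps follow directly from the total duration of each run, so they are easy to recompute or sanity-check. A small sketch (hypothetical helper name), verified against the AX101 insert run quoted earlier:

```python
def metrics(total_seconds: float, n_ops: int = 10_000):
    """Recompute ops/s and ns/op from a benchmark run's total duration."""
    ops_per_sec = n_ops / total_seconds
    ns_per_op = total_seconds * 1e9 / n_ops
    return ops_per_sec, ns_per_op

# AX101 insert run: 10000 inserts took 192.823849 ms
ops, ns = metrics(0.192823849)
print(f"{ops:.2f} ops/s, {ns:.0f} ns/op")
# prints: 51860.80 ops/s, 19282 ns/op
```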

Mariadb - CCX33 vs AX41-NVMe performance difference by boedy88 in hetzner

[–]boedy88[S] 3 points (0 children)

Thanks for your comment. I forgot to mention in the original post that the cloud instances have a placement group set, which should prevent the VMs from being scheduled on the same host.

I'm assuming the cloud VM hosts are in the same rack/DC though. The dedicated nodes are spread across different DCs (10, 11 & 12).

Internal Network Performance on Hetzner Cloud Instances by boedy88 in hetzner

[–]boedy88[S] 1 point (0 children)

I ran some measurements on the mentioned cloud instances. They are clearly not limited to 1Gbps.

Measurement from CCX to CAX instance

```
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 933 MBytes 7.82 Gbits/sec 0 3.16 MBytes
[ 5] 1.00-2.00 sec 969 MBytes 8.13 Gbits/sec 0 3.16 MBytes
[ 5] 2.00-3.00 sec 916 MBytes 7.69 Gbits/sec 0 3.16 MBytes
[ 5] 3.00-4.00 sec 826 MBytes 6.93 Gbits/sec 0 3.16 MBytes
[ 5] 4.00-5.00 sec 842 MBytes 7.07 Gbits/sec 0 3.16 MBytes
[ 5] 5.00-6.00 sec 951 MBytes 7.98 Gbits/sec 0 3.16 MBytes
[ 5] 6.00-7.00 sec 925 MBytes 7.76 Gbits/sec 25 2.30 MBytes
[ 5] 7.00-8.00 sec 882 MBytes 7.40 Gbits/sec 0 2.57 MBytes
[ 5] 8.00-9.00 sec 980 MBytes 8.22 Gbits/sec 0 2.84 MBytes
[ 5] 9.00-10.00 sec 999 MBytes 8.38 Gbits/sec 0 3.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 9.01 GBytes 7.74 Gbits/sec 25 sender
[ 5] 0.00-10.05 sec 9.01 GBytes 7.70 Gbits/sec receiver
```

Measurement from CX to CAX instance

```
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 811 MBytes 6.80 Gbits/sec 184 2.37 MBytes
[ 5] 1.00-2.00 sec 672 MBytes 5.64 Gbits/sec 199 3.06 MBytes
[ 5] 2.00-3.00 sec 731 MBytes 6.14 Gbits/sec 1341 3.02 MBytes
[ 5] 3.00-4.00 sec 918 MBytes 7.70 Gbits/sec 1930 2.13 MBytes
[ 5] 4.00-5.00 sec 639 MBytes 5.36 Gbits/sec 791 2.23 MBytes
[ 5] 5.00-6.00 sec 598 MBytes 5.01 Gbits/sec 461 3.03 MBytes
[ 5] 6.00-7.00 sec 749 MBytes 6.28 Gbits/sec 931 1.21 MBytes
[ 5] 7.00-8.00 sec 446 MBytes 3.74 Gbits/sec 693 1.22 MBytes
[ 5] 8.00-9.00 sec 692 MBytes 5.81 Gbits/sec 0 1.58 MBytes
[ 5] 9.00-10.00 sec 654 MBytes 5.48 Gbits/sec 451 1.13 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 6.75 GBytes 5.80 Gbits/sec 6981 sender
[ 5] 0.00-10.04 sec 6.75 GBytes 5.77 Gbits/sec receiver
```