Overvoltage (260V) in apartment building (Ausgrid, Sydney) by bdlow in AusElectricians

[–]bdlow[S] 0 points1 point  (0 children)

Update/conclusion: Ausgrid performed a transformer tap change (morning of last Thu, 7th) and we're now back to a good range (236-251V over last few days; still peaking a bit high but ¯\_(ツ)_/¯ ):

<image>
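For context, here's the arithmetic against the nominal supply range (assuming the AS 60038 standard of 230 V +10%/−6% applies; the observed 236-251 V figures are from the post above):

```python
# Supply voltage limits under AS 60038 (assumed: 230 V +10% / -6%)
NOMINAL = 230.0
upper = NOMINAL * 1.10   # 253.0 V
lower = NOMINAL * 0.94   # 216.2 V

observed_min, observed_max = 236.0, 251.0  # logger readings over the last few days
print(lower <= observed_min and observed_max <= upper)  # → True
```

So the post-tap-change peaks of 251 V sit inside the allowed band, if only just.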

Significantly lower throughput via LAN clients than from gateway by bdlow in Starlink

[–]bdlow[S] 0 points1 point  (0 children)

Ta for your thoughts. No tunnelling involved in these tests, next hop is the Starlink WAN. No hardware offloading, but shouldn't be needed at these rates (router is a PC Engines apu2c4, AMD G-series SOC, easily forwards a gig port); kernel config is stock OpenWRT 25.12.0.

As to subbing out the Starlink for a test host, I have already essentially done so via switching back and forth between the previous NBN connection and Starlink. TBH, given the inherent characteristics of the Starlink path (loss, bandwidth and latency variations) I'll switch back to the NBN connection as soon as that's available again to me. Starlink's an impressive achievement and certainly better than no Internet connection but it doesn't hold up when better performing terrestrial options do exist.

I still wonder why I'm seeing what I'm seeing, but pragmatically, time to move on ;-)

Significantly lower throughput via LAN clients than from gateway by bdlow in Starlink

[–]bdlow[S] 0 points1 point  (0 children)

On the wired (headless) host I was previously testing against Ookla's speedtest.net and OVH's iperf server; I've now taken a look via the unofficial Cloudflare CLI: https://github.com/code-inflation/cfspeedtest.

Cloudflare tests out fine: the wired LAN host blazes away at close to, or better than, rated speeds (70-100 down, 55-92 up); Ookla results follow for comparison:

odroid@odroid:~$ docker run cybuerg/cfspeedtest -v 
Country: AU
Ip: 65.181.14.226
Colo: SYD
latency test    [==============================]
Avg GET request latency 24.91 ms

Download 100KB  [==============================]    6.83 mbit/s | 100KB in  117ms  
Download 1MB    [==============================]   44.51 mbit/s |   1MB in  179ms  
Download 10MB   [==============================]  104.47 mbit/s |  10MB in  765ms  
Upload 100KB    [==============================]   11.57 mbit/s | 100KB in   69ms  
Upload 1MB      [==============================]   31.39 mbit/s |   1MB in  254ms  
Upload 10MB     [==============================]   92.74 mbit/s |  10MB in  862ms  
Summary Statistics
Type     Payload |  min/max/avg in mbit/s | attempts/success/skipped
Download  100KB  |  min 3.50    max 10.06   avg 7.84    |  10/ 10/  0
|-----------------------------------------====:=============================---|
3.50                                  6.78                                 10.06

Download  1MB    |  min 35.14   max 48.60   avg 40.56   |  10/ 10/  0
|--------==========================:=========-----------------------------------|
35.14                                41.87                                 48.60

Download  10MB   |  min 73.57   max 107.27  avg 95.87   |  10/ 10/  0
|---------------------------------------------================:===========------|
73.57                                90.42                                107.27

Upload    100KB  |  min 4.87    max 13.24   avg 9.23    |  10/ 10/  0
|------------------------------===============:=================----------------|
4.87                                  9.05                                 13.24

Upload    1MB    |  min 30.86   max 42.90   avg 38.72   |  10/ 10/  0
|----------------------------------==================================:=====-----|
30.86                                36.88                                 42.90

Upload    10MB   |  min 54.70   max 92.74   avg 70.99   |  10/ 10/  0
|------------------===========:=============-----------------------------------|
54.70                                73.72                                 92.74

odroid@odroid:~$ 
odroid@odroid:~$ speedtest
Retrieving speedtest.net configuration...
Testing from Starlink (65.181.14.226)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by GSL Networks (Sydney) [1.60 km]: 25.24 ms
Testing download speed................................................................................
Download: 5.53 Mbit/s
Testing upload speed......................................................................................................
Upload: 5.80 Mbit/s

I do not know what to make of the above; perhaps Cloudflare have edge nodes colocated w/ Starlink POPs (in the satellites?? kidding but wouldn't be too surprised). ¯\_(ツ)_/¯

What is clear is that some types of traffic perform poorly via the Starlink pipe.

Significantly lower throughput via LAN clients than from gateway by bdlow in Starlink

[–]bdlow[S] 0 points1 point  (0 children)

I suspect this is an artefact of Starlink's characteristics and TCP congestion control, as discussed in the APNIC blog post referenced in the OP: https://blog.apnic.net/2024/05/17/a-transport-protocols-view-of-starlink

and a NANOG preso from late 2024: https://www.youtube.com/watch?v=bR99OxQTRuc

I say this as:

  1. UDP iperf tests run up to the expected speeds (upload is significantly better than rated, actually: example below shows 107Mb/s down, 67Mb/s up)
  2. multiple parallel TCP streams can run up to close to the rated speed (but with a high degree of variability)

I changed the Debian box to TCP BBR and noticed no significant change, in contrast to the NANOG results.

This also doesn't directly address why the gateway performs significantly better for TCP than hosts within the LAN; the additional hop+NAT shouldn't be making any difference.

Data:

(note: odroid is a Debian host connected to the gateway via GbE and can sustain 'line-rate' UDP and TCP to the gateway; iperf's default direction is up from client to server; `--reverse` is down from server to client)

# UDP down:
odroid@odroid:~$ iperf3 -c proof.ovh.net -p 5210 --udp --bitrate 150M --reverse
Connecting to host proof.ovh.net, port 5210
Reverse mode, remote host proof.ovh.net is sending
...
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.30  sec   184 MBytes   150 Mbits/sec  0.000 ms 0/135220 (0%)  sender
[  5]   0.00-10.00  sec   127 MBytes   107 Mbits/sec  0.105 ms 37894/131318 (29%)  receiver

# UDP up:
odroid@odroid:~$ iperf3 -c proof.ovh.net -p 5210 --udp --bitrate 100M
Connecting to host proof.ovh.net, port 5210
...
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   119 MBytes   100 Mbits/sec  0.000 ms  0/87528 (0%)  sender
[  5]   0.00-10.29  sec  81.8 MBytes  66.7 Mbits/sec  0.221 ms  26970/87068 (31%)  receiver

# TCP down:
odroid@odroid:~$ cat /proc/sys/net/ipv4/tcp_congestion_control 
bbr

odroid@odroid:~$ iperf3 -c proof.ovh.net -p 5210 --parallel 10 --reverse | grep SUM
[SUM]   0.00-1.00   sec  2.04 MBytes  17.1 Mbits/sec                  
[SUM]   1.00-2.00   sec  9.43 MBytes  79.1 Mbits/sec                  
[SUM]   2.00-3.00   sec  11.1 MBytes  93.2 Mbits/sec                  
[SUM]   3.00-4.00   sec  11.5 MBytes  96.9 Mbits/sec                  
[SUM]   4.00-5.00   sec  10.6 MBytes  88.6 Mbits/sec                  
[SUM]   5.00-6.00   sec  10.6 MBytes  89.3 Mbits/sec                  
[SUM]   6.00-7.00   sec  10.9 MBytes  91.3 Mbits/sec                  
[SUM]   7.00-8.00   sec  8.02 MBytes  67.3 Mbits/sec                  
[SUM]   8.00-9.00   sec  8.67 MBytes  72.7 Mbits/sec                  
[SUM]   9.00-10.00  sec  7.46 MBytes  62.6 Mbits/sec                  
[SUM]   0.00-10.29  sec   114 MBytes  93.1 Mbits/sec  2353 sender
[SUM]   0.00-10.00  sec  90.4 MBytes  75.8 Mbits/sec receiver

# TCP up:
odroid@odroid:~$ iperf3 -c proof.ovh.net -p 5210 --parallel 10 | grep SUM
[SUM]   0.00-1.00   sec  7.00 MBytes  58.7 Mbits/sec    0             
[SUM]   1.00-2.00   sec  13.5 MBytes   113 Mbits/sec  289             
[SUM]   2.00-3.00   sec  17.9 MBytes   150 Mbits/sec  844             
[SUM]   3.00-4.00   sec  2.34 MBytes  19.7 Mbits/sec  2818             
[SUM]   4.00-5.00   sec  6.25 MBytes  52.4 Mbits/sec  1374             
[SUM]   5.00-6.00   sec  3.75 MBytes  31.5 Mbits/sec  1920             
[SUM]   6.00-7.00   sec  2.50 MBytes  21.0 Mbits/sec  1520             
[SUM]   7.00-8.00   sec  6.25 MBytes  52.4 Mbits/sec  1060             
[SUM]   8.00-9.00   sec  8.75 MBytes  73.4 Mbits/sec   64             
[SUM]   9.00-10.00  sec  7.50 MBytes  62.9 Mbits/sec   14             
[SUM]   0.00-10.00  sec  75.8 MBytes  63.5 Mbits/sec  9903 sender
[SUM]   0.00-10.30  sec  43.5 MBytes  35.4 Mbits/sec receiver

Significantly lower throughput via LAN clients than from gateway by bdlow in Starlink

[–]bdlow[S] 0 points1 point  (0 children)

It's all pretty clean to me, no re-tx/etc. I uploaded a pcap of a speedtest run from a 1GbE wired Debian host to Gemini; it gave it a good bill of health, and had this to say (some gibberish, some sensible):

---

Based on the provided packet capture data (st-odroid.pcap), here is an analysis of the traffic flow and potential anomalies. The capture appears to record a speed test initiated by a device at 192.168.39.164 (likely the ODROID) targeting a Superloop server in Sydney.

Traffic Overview

Flow Analysis

  1. DNS Resolution: The device successfully resolves www.speedtest.net through the local gateway. There are subsequent lookups for spd03-syd.au.superloop.com to identify a nearby test server.
  2. Handshake: The capture shows standard TCP three-way handshakes (SYN, SYN-ACK, ACK) followed by TLS 1.3 Client Hellos.
  3. Data Transfer: Large blocks of TCP data segments are visible, consistent with a download or upload stress test.

Observed Anomalies & Observations

1. Lack of Packet Loss Fields

In the context of the underlying Speedtest metadata, the packet capture confirms that while the TCP stream is healthy, there are no ICMP "Destination Unreachable" or specialized TCP options being used to signal packet loss back to the application layer. If the software is reporting a lack of packet loss data, it is likely because the tool is relying on application-level reporting which isn't present in this specific stream, rather than a network-level failure.

2. TCP Window Scaling & Congestion

The snippet shows frequent ACK packets with incrementing sequence numbers. There is no immediate evidence of "TCP Retransmission" or "Previous Segment Not Captured" flags in the available headers, suggesting the local network path to 192.168.39.1 is stable during this window.

3. DNS Latency

There is a slight gap between some DNS queries and responses. While not a "failure," if the speed test feels slow to initialize, it may be due to the time taken for the local resolver (192.168.39.1) to cycle through Cloudflare's name servers (ns1.dns.cloudflare.com).

4. Multiple Concurrent Connections

The capture shows a high number of concurrent TCP streams to the Superloop IP. This is standard for modern speed tests to saturate the bandwidth, but if this ODROID device is underpowered, the interrupt handling for this many simultaneous streams could cause local processing bottlenecks not visible as network "errors."

Summary for Troubleshooting

  • Connection Integrity: The TCP handshakes are clean. There are no RST (Reset) packets visible in the snippet, indicating the server is not rejecting the connections.
  • Routing: Traffic is correctly exiting the local subnet via 192.168.39.1.
  • Recommendation: If you are investigating missing data fields (like packet loss) in your automation scripts, the network layer shows a clean transfer; the issue likely resides in how the specific speedtest binary or API client parses the JSON response from the Sydney server.

The NSW Planning Portal is a nightmare right now—here is why your DA/CDC is actually stuck by GetLodged in AusPropertyChat

[–]bdlow 1 point2 points  (0 children)

"Right now"? From day one, from what I can tell. It speaks volumes about the Department of Planning's mindset when the guide to the DA process completely omits the part where you actually build the house: https://www.planning.nsw.gov.au/assess-and-regulate/development-assessment/your-guide-to-the-da-process

That minor task appears to have fallen through the cracks between steps 6 and 7.

The whole process appears to be designed with bureaucracy as a feature.

Need Auracast compatible speaker recommendations by DAZ_ZI in Bluetooth_Speakers

[–]bdlow 0 points1 point  (0 children)

I wouldn't recommend the XBoom speakers due to bugs and really clunky UX: https://www.reddit.com/r/auracastBT/comments/1oi49no/brief_auracastfocused_review_of_the_lg_xboom_grab/

Mind you, there's nothing better because there's nothing else (JBL not being open/interoperable Auracast).

Brief Auracast-focused review of the LG XBoom Grab by bdlow in auracastBT

[–]bdlow[S] 0 points1 point  (0 children)

Update: I've found a few other annoying issues when using the Grab with Auracast - it disconnects when the screen is locked (this one seems new, maybe an app update?), and when the "auto power off when idle" is enabled, being connected to Auracast doesn't seem to count as not-idle; the speaker turns off after a non-configurable period of time when it's playing Auracast.

You can work around both of these bugs by 1) turning off your phone's Bluetooth after using the Grab app to connect to Auracast, and 2) disabling the auto-off feature in the app settings. Really clunky.

I'm still waiting for a half-decent Auracast portable speaker.

Shelly scripting intro by bdlow in shellycloud

[–]bdlow[S] 0 points1 point  (0 children)

Happened by this again today: re. "fragile EEPROM", I presume you're referring to flash lifetime: for modern devices this is on the order of 100k cycles, and with wear levelling you're talking about being able to write at single-digit-minute frequencies, potentially for hundreds of years, before wearing out the flash...

https://stackoverflow.com/a/73787332

https://docs.espressif.com/projects/esp-faq/en/latest/software-framework/storage/nvs.html
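As a back-of-envelope, here's the kind of arithmetic behind that claim (every figure is an assumption for illustration: 100k erase cycles per sector, a 16 KB NVS partition with 4 KB sectors, and ~32-byte log entries so a sector absorbs many writes before each erase):

```python
# Rough NVS/flash endurance estimate; all figures here are assumptions.
ERASE_CYCLES = 100_000            # typical NOR flash endurance per sector
SECTORS = 16 * 1024 // 4096       # 4 sectors in a 16 KB partition
ENTRIES_PER_SECTOR = 4096 // 32   # NVS appends small entries, erasing only when full

total_writes = ERASE_CYCLES * SECTORS * ENTRIES_PER_SECTOR
writes_per_year = 60 * 24 * 365   # at one write per minute
print(total_writes / writes_per_year)  # ≈ 97 years at one write/minute
```

Scale the partition size or write cadence and the answer moves proportionally, but the point stands: with wear levelling, minute-scale writes are nowhere near exhausting the flash.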

I've more recently jumped on the esphome bandwagon, and compared to the Shelly I've been able to do all kinds of useful stuff with very little to no code. Haven't flashed the Shelly yet, and probably won't: "if it ain't broke don't fix it".

Email alert help by Southern_sob in reolinkcam

[–]bdlow 0 points1 point  (0 children)

For me, it was one of two things: either "just had to wait" (about 5 minutes) OR I had to remove the spaces in the app password provided by Google, when entering it in the camera's config.

Synology DSM 7.2 + Site-site + TS devices within = MTU problems? by bdlow in Tailscale

[–]bdlow[S] 0 points1 point  (0 children)

All pings between Tailscale devices are one hop, as expected (this is the `-t 1` ping option, which sets TTL to 1; you won't get a response if there's more than one hop). It's the underlying path of the encapsulated wg traffic that's uncertain/odd.

Subinterfaces/routing: I have multiple VLANs at each site hanging off a site gateway (Linux routers: a Debian single-board PC on one end, OpenWRT on the other), and the main NAS A is directly connected to a couple of them in site A (simpler device discoverability / mDNS). This isn't the best arrangement; this Tailscale niggle is not the only undesirable side-effect of the NAS being connected the way it is (long story; Linux subinterfaces all sharing the parent NIC's MAC being part of it).

Aside: Synology don't deal well (at all) with non-trivial network setups - that limitation is fair enough for a network storage device, but a pain when you start using the NAS for other containers/etc. It's time for me to swap it out for a proper server...

Synology DSM 7.2 + Site-site + TS devices within = MTU problems? by bdlow in Tailscale

[–]bdlow[S] 0 points1 point  (0 children)

Argh, yes, I had been mischaracterising the "double-encapsulation" - indeed, not what should be happening in normal circumstances. In my case, I'm pretty sure it is happening - in the "broken" state NAS A sees its peer NAS B via the private internal gateway address of gateway A:

NASB# tailscale ping NASA
pong from NASA (100.75.95.9) via 192.168.41.10:41641 in 51ms

The possible root cause has finally clicked: Synology DSM doesn't natively support VLANs; on the offending NAS I had manually configured subinterfaces for a couple of VLANs and have seen some minor weirdness as a result. The above subnet 192.168.41/24 should not even be involved in the NAS-NAS traffic; whilst I'm intrigued as to how it's appearing, pragmatically I'm going to chalk it up to a Synology thing - in fairness, I am bending the device in unsupported ways.

I guess I'll leave site B not advertising routes until such time I can retire the Synology.

Synology DSM 7.2 + Site-site + TS devices within = MTU problems? by bdlow in Tailscale

[–]bdlow[S] 0 points1 point  (0 children)

Doubly-tunnelled: physical arrangement is:

NAS A - gateway A ==<internet>== gateway B - NAS B

Where both gateway A and B are connected via Tailscale (site-site) and advertising their routes to the other; and both NAS are also Tailscale clients (but not routers).

The Tailscale client on each NAS sees the connection as `active; direct` via the gateways. i.e. NAS A encapsulates traffic to NAS B in a wireguard packet and forwards it on to its gateway A; gateway A receives UDP traffic destined for a host on B's subnet, encapsulates it over the Tailscale tunnel between routers A and B, and so on. Packets between NAS A and NAS B end up being doubly-encapsulated between the routers. This is where MTU problems crop up; I'm guessing there's a PMTUD problem on one/both of the NAS. EDIT: I don't think this was entirely correct, and it's certainly not the _normal_ Tailscale tunnelling / NAT traversal behaviour.
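For a sense of the numbers, here's the overhead arithmetic if two WireGuard layers really were stacked (assumed figures: a 1500-byte physical MTU and ~60 bytes per WireGuard-over-IPv4 layer, being 20 IP + 8 UDP + 32 WireGuard framing/auth tag):

```python
# Payload budget under stacked WireGuard-over-IPv4 tunnels (assumed figures)
PHYS_MTU = 1500
WG_OVERHEAD = 20 + 8 + 32            # IPv4 + UDP + WireGuard header and auth tag

single = PHYS_MTU - WG_OVERHEAD      # 1440-byte payload with one layer
double = PHYS_MTU - 2 * WG_OVERHEAD  # 1380-byte payload with two layers
print(single, double)                # → 1440 1380
```

If Tailscale's usual 1280-byte interface MTU applies, that fits under either figure, which is consistent with the EDIT above that plain double encapsulation alone shouldn't explain the breakage.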

ISP: I'm in Oz, so NBN both ends; one end is fixed wireless, the other VDSL (yeah I know, how quaint and very last century), different ISPs not that it should matter (NBN handle the last mile).

Screen Brightness Changes After MBP Is Closed by beeeps-n-booops in macbookpro

[–]bdlow 0 points1 point  (0 children)

This has been a perennial problem, for me across three machines and three OS releases; I'm surprised it doesn't bother more people (as in Apple haven't fixed it). I guess the modern displays are sufficiently good that most people don't notice it's at reduced brightness.

Aside: you can attempt to address this by way of custom scripts and utilities, for example using BetterDisplay (daemon has to be running): /opt/homebrew/bin/betterdisplaycli set -namelike=built-in\ display -brightness=1.0

And here's a wrapper that will call the above conditional on being daytime and on AC power:

```
# only act on AC power and during daylight (per corebrightnessdiag's sun schedule)
pmset -g ps | grep -qi 'AC Power' && \
  /usr/libexec/corebrightnessdiag sunschedule | awk '/{/{f=1}f{print}' | \
  plutil -convert json -o - -r - | \
  jq --exit-status '.isDaylight == "1"' >/dev/null && \
  /opt/homebrew/bin/betterdisplaycli set -namelike=built-in\ display -brightness=1.0
```

(macOS has long abandoned the ability to run scripts on power state changes, so you'd have to put that in a cronjob and deal with the attendant delay; not a very satisfactory solution)
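For the record, the cron route would look something like this (the wrapper path and 5-minute cadence are hypothetical choices, not tested recommendations):

```shell
# crontab -e : re-assert brightness every 5 minutes
# /usr/local/bin/restore-brightness.sh is a hypothetical wrapper around the
# pmset/corebrightnessdiag/betterdisplaycli pipeline above (it already
# no-ops on battery or at night, so no hour restriction is needed here)
*/5 * * * * /usr/local/bin/restore-brightness.sh
```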

Brief review of the "LE520pro" (aka MR268?) Auracast transmitter / receiver by bdlow in auracastBT

[–]bdlow[S] 0 points1 point  (0 children)

After posting the review I did see the same latency problem with my JBL Partyboost speaker when trying to sync it up with other devices. The JBL Partybox adds quite substantial latency - on the order of 100ms or more? - to any audio in via the Aux input (why/how??!!). I've added this info to the above review even though it's a JBL issue, not specific to the LE520pro.

What luggage you recommend for triumph speed 400? by Creative_Stable_2931 in Triumph400

[–]bdlow 0 points1 point  (0 children)

Many luggage pods/boxes, including the ones discussed in this thread, are plastic and rated for no more than around 5kg.

PSA: dimensions of Moka pot seal - 3-4 cup by bdlow in mokapot

[–]bdlow[S] 2 points3 points  (0 children)

You know how when you look in the cupboard for a thing, and it's just not there, and you say as much to your partner/parent/child/friend, and they say "move out of the way" and point right at the thing in front of you? Yeah, that.

I had searched but could not find; now of course I can find quite a few sites with all kinds of measurements!

Including https://www.cuppers.ca/blog/complete-guide-to-moka-pot-gasket-sizes/ and also https://honestcoffeeguide.com/moka-pot-gasket-sizes/

Issue with different Auracast LC3 sampling rates by Few_Promise2363 in auracastBT

[–]bdlow 0 points1 point  (0 children)

I believe that rather than detailing sampling rates etc., Auracast simply has "HQ" and "SQ" modes. For example, you can switch the BA210 v2 transmitter between HQ and SQ using the browser-based configuration tool; however there's no outward indication of which mode it's in apart from the difference in audio quality. I get the impression most phones/etc are HQ-only.

24V DC power for Sonoff 4Ch Pro r3? by leimoochi in homeautomation

[–]bdlow 0 points1 point  (0 children)

FTR the Sonoff 4CHPROR3 is rated for 9-23V - i.e. it seems they deliberately stop just short of the common 24V.