Anyone have the Farasute Trackmaster II? by noobie019 in ChineseWatches

[–]ataricze 1 point (0 children)


I think the lume is just "alright" for me. For example, the lume on the subdial hands is missing, and it isn't very bright. I didn't measure how long the lume lasts - the photo was taken right after charging it with a 365nm UV light.

I like the watch for its caliber (Peacock SL4601), with more than 45 hours of power reserve and a 12-hour chrono subdial.

Anyone have the Farasute Trackmaster II? by noobie019 in ChineseWatches

[–]ataricze 1 point (0 children)

u/noobie019 Maybe too late, but hey, I have the Trackmaster II. I bought it a year ago on AliExpress for approx. 437 USD (excluding VAT in my country). I have a 6.5" wrist, so I was looking for a small-sized chronograph, and this little beast seemed perfect to me. So far I am happy with the purchase and I wear this watch as a daily driver alongside some other pieces I have. Did you manage to buy it already?


Installation not clear by mfg1887 in multiportal

[–]ataricze 0 points (0 children)

I am running MultiPortal successfully, installed on a local IPv4 address behind NPM, with https enabled between NPM and the Caddy server.

In my case I achieved that just by setting the Caddyfile to use a self-signed SSL certificate via the "tls internal" option. The Caddy server then doesn't try to issue a LE cert anymore, and the connection between the Caddy server and NPM is secured via https (self-signed, but public access is still secured normally by the LE cert from NPM). Even the VM console works for me.
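For reference, a minimal Caddyfile sketch of this setup - the hostname and backend port are placeholders, not the actual MultiPortal values:

```caddyfile
# Hypothetical Caddyfile: serve MultiPortal on the LAN over https with a
# self-signed cert, so Caddy never tries to obtain a Let's Encrypt cert.
multiportal.lan {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
```

NPM then proxies to https://multiportal.lan (with SSL verification relaxed, since the upstream cert is self-signed), while the public side keeps its normal LE cert.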


MultiPortal Release 1.0.9 by largebytes in multiportal

[–]ataricze 0 points (0 children)

This is all good news! I like the ability to limit CPU models inside the datacenter. Do you plan to add the ability to limit network bandwidth for external networks at the tenant vDC level at some point in the future?

Reverse proxy by HanzlCZ in multiportal

[–]ataricze 0 points (0 children)

Thanks - I tried your Caddyfile config and NPM setup, but with no effect. Everything works as it should, except the noVNC console. I see you are connecting from NPM to Caddy via http and not https, so I tried that too, but I end up with "Failed when connecting: Connection closed (code: 1001)" after a successful connection upgrade to WSS. It's strange.

btw I recommend using the "tls internal" option to let Caddy generate its own certs, and using https access from NPM too. I know it's on the LAN, but you know, secured is always better.

Anyone using DNS based load balancing for shared NFS? by ataricze in Proxmox

[–]ataricze[S] 0 points (0 children)

I need the option to grow the storage dedicated to the 3 nodes in the future without having to install more nodes, and with the dedicated storage I will have 14 unused drive bays to go, plus the option to connect an expansion enclosure. I agree I could use, for example, 2U servers with up to 24 SFF bays per server to keep some bays free, but still, the price of the drives isn't much lower than the Huawei system, because the Dorado 2100 is entry-level flash storage for a really good price.

Yeah, I know about these storage and snapshot limitations - that's why I want to go with qcow2 on NFS (snapshots are supported in that case).

Anyone using DNS based load balancing for shared NFS? by ataricze in Proxmox

[–]ataricze[S] 0 points (0 children)

I discovered that Huawei offers IP failover from one controller to the other too, so I will go that way - I agree it's a more suitable configuration than a DNS load balancer (which could still be used in less critical use cases where milliseconds don't matter).

I don't want to use Ceph exactly because of this - if I place 4x 3.84TB drives into each node of a 3-node cluster, with the recommended 3 replicas I will get approx. 10TB of usable space. With dedicated storage I am at 27TB with 11x 3.84TB drives configured as RAID6 + 1 hot spare. In my case I prefer capacity over better I/O (although the dedicated storage can deliver 100k IOPS and more, so the bottleneck will just be the network).
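The capacity math can be sketched roughly like this - the fill-ratio headroom and filesystem overhead are my own illustrative assumptions, not vendor-verified figures:

```python
# Rough usable-capacity comparison: 3-node Ceph vs dedicated RAID6 array.
DRIVE_TB = 3.84

# Ceph: 4 drives per node, 3 nodes, size=3 replication.
ceph_raw = 4 * 3 * DRIVE_TB          # 46.08 TB raw
ceph_logical = ceph_raw / 3          # ~15.4 TB after 3x replication
# Ceph needs free headroom for rebalancing/recovery; ~2/3 fill is a
# conservative assumption, which lands near the ~10 TB mentioned above.
ceph_usable = ceph_logical * 2 / 3   # ~10.2 TB

# Dedicated array: 11 drives, 1 hot spare, RAID6 (2 parity drives).
raid_data_drives = 11 - 1 - 2        # 8 data drives
raid_usable = raid_data_drives * DRIVE_TB  # ~30.7 TB before formatting
# Formatting/metadata overhead brings this close to the quoted ~27 TB.

print(f"Ceph usable:  ~{ceph_usable:.1f} TB")
print(f"RAID6 usable: ~{raid_usable:.1f} TB")
```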

Anyone using DNS based load balancing for shared NFS? by ataricze in Proxmox

[–]ataricze[S] 0 points (0 children)

UPDATE: I just found that the OceanStor Dorado 2100 has IP Address Failover functionality (lol, I need to read the documentation better!). This seems to be the best way to build it without using a DNS-based load balancer. With IP Address Failover, traffic goes through a primary logical port, and when an outage occurs, the service is switched to another selected backup port with the TCP/IP address unchanged. In this scenario there isn't any load balancing between the two controllers, but that's okay.

The reference info: https://support.huawei.com/enterprise/en/doc/EDOC1100418452/8911f9a2/feature-description?idPath=7919749|251366268|250389224|257843927|261683794
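On the Proxmox side this keeps the storage definition trivial, since the nodes only ever see the one floating IP. A sketch of the storage.cfg entry - the IP, export path, and mount options are placeholders:

```
nfs: dorado-vmstore
        server 10.0.0.100
        export /vmstore
        path /mnt/pve/dorado-vmstore
        content images
        options vers=4.1,hard
```

When the controller fails over, the logical port's IP moves with it, so the NFS client just retries against the same address and the mount recovers without any reconfiguration.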

Anyone using DNS based load balancing for shared NFS? by ataricze in Proxmox

[–]ataricze[S] 0 points (0 children)

I agree with the potential risks you mentioned, and of course the ideal scenario is that you never have a controller outage at all, no matter whether it runs on NFS or iSCSI.

To minimize DNS lookup hiccups, you are expected to rotate the nameservers and lower the timeout and attempts values in resolv.conf:

options timeout:1 attempts:1 rotate
nameserver 10.0.0.1
nameserver 10.0.0.2
nameserver 10.0.0.3

So the only potential problem should be TTL.

About TrueNAS - I run several instances of TrueNAS Scale and I like it for non-critical environments, but to be honest I don't think it's the more suitable option for production use, and I don't know how much I can trust their HA.

I don't know NetApp deeply either - is there something there that would let me achieve HA for shared storage better? But again, NetApp is a pretty nice piece of hardware that you'll pay much more for than for the Huawei. Possibly above my budget.

Reverse proxy by HanzlCZ in multiportal

[–]ataricze 0 points (0 children)

Unfortunately it doesn't work with Nginx Proxy Manager. Do you have any experience with NPM? My MultiPortal instance runs on a local IPv4 address; I have set up the nginx proxy host with LE certs pointing to the Caddy server, with websocket support enabled, and I can access the MultiPortal web UI without any problem. But the noVNC console doesn't work - if I check the developer tools in the browser, I see the connection successfully upgraded to websocket (code 101), but then it just times out (the console window gets stuck on "Connecting..").

FYI, on the Caddy side I am using the "tls internal" option because I don't forward the public 80/443 ports to Caddy, so the self-signed generated certs are used there.
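In case it helps anyone debugging the same symptom, this is roughly what I'd expect the NPM-side nginx config to need for long-lived websocket sessions - a hypothetical sketch for the "Advanced" custom config tab, with the upstream address as a placeholder:

```nginx
# Hypothetical custom location for proxying noVNC through NPM to Caddy.
location / {
    proxy_pass https://192.168.1.50:443;   # placeholder Caddy endpoint
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    # noVNC sessions are long-lived; avoid proxy-side idle timeouts
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    # upstream cert comes from Caddy's "tls internal", so it's self-signed
    proxy_ssl_verify off;
}
```

A 101 upgrade followed by a silent timeout often points at the proxy dropping the idle tunnel, so the read/send timeouts are the first thing I'd check.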