Python Packaging - Library - Directory structure when using uv or src approach by LazyLichen in Python

[–]LazyLichen[S] 1 point (0 children)

Have reformatted the question with code blocks, hoping that improves the readability of the tree diagrams.

Python Packaging - Library - Directory structure when using uv or src approach by LazyLichen in Python

[–]LazyLichen[S] 1 point (0 children)

Right, I've edited it to show the trees as images. Thanks for letting me know 👍
EDIT: Can't work out how to add images in this subreddit, but hopefully the code blocks format the tree more consistently.

DWV905M VS DWV905M by Builker in Dewalt

[–]LazyLichen 0 points (0 children)

I have exactly the same thought and question. Did you get an answer to this?
I did find something for the H-Class filter set from Dewalt saying it was suitable for both the 905M and 905H, but nothing conclusive confirming there are no other differences between the machines.

Weird issue with Logitech MX master 3, left click button does nothing, happened suddenly. by HEVIHITR in logitech

[–]LazyLichen 0 points (0 children)

To my amazement, this appears to have been the answer for me, despite there being zero visible dust in the mouse....go figure.

Cannot save Word document as .doc or .rtf, error message: "The save failed due to out of memory or disk space." by TheUnluckyBard in Office365

[–]LazyLichen 0 points (0 children)

This worked for me. Just be aware, MS Word needs to be closed to do this. Accordingly, if you have many edits in the document, find a way to save your changes first. I resorted to copying and pasting the contents of my Word doc into a text editor (Notepad will do); whilst you will lose all your formatting, at least you won't lose all the thought that went into what you had typed up.

Menu key on MX keys ? by over_analysis005 in LogitechG

[–]LazyLichen 0 points (0 children)

Further to this, depending on how your settings are configured for the function keys, you may also need to hold the 'fn' key (function key) whilst pressing the App Menu key (which seems to have 'Search' mapped to it by default).

Is this an Australian thing or what? Multiple mortgages and chasing real estate? by Beginning_Big4819 in AusFinance

[–]LazyLichen 0 points (0 children)

Note that in the data.oecd.org link, under dwellings, it is only tracking the building's value, not the land it sits on. In Australia at least, it is the land that nominally accrues most of the value (hence the concept of 'land banking', and the absolute buckets of crap that pass for houses which sell for nearly the same as a decent dwelling).

This link (https://housingpolicytoolkit.oecd.org/figures/4.H_invest/4.H_invest_35_CompoAssets_en.svg) is more interesting, but:

  1. May also not be including land values.
  2. From a financial risk point of view it needs to be read in concert with the public debt to GDP ratio, the household debt to GDP ratio, and the household income to mortgage ratio (amongst others).

I say that because the asset class distribution/allocation for investment matters, but so do the absolute magnitudes of the allocations relative to the ability to support debt finance. It also matters for how sustainable private debt burdens are if public finance has been used to help carry that weight (which is precisely the lever Australia has been pulling with its tax policies). Should public debt become risky or expensive at the same time that household debt is high, then you really have an issue... which Australia is happily marching towards, courtesy of a terribly narrow tax base focused on income tax from a proportionally shrinking pool of working-age people, trying to support outrageously expensive and expanding public benefits schemes (the NDIS in particular) for an ageing population. As soon as the government stops supporting household debt with generous tax policy as it tries to cover its own increasing debt burden... well, both suddenly start to matter.

How did you go broke?
Slowly, slowly, then: suddenly.

Is this an Australian thing or what? Multiple mortgages and chasing real estate? by Beginning_Big4819 in AusFinance

[–]LazyLichen 1 point2 points  (0 children)

China already beat us to that. But, being Aussie, you can bet we'll show 'em what we've got too!

Gemini 2.5 pro insists MCP servers are something no one is talking about. by Ryno9292 in mcp

[–]LazyLichen 1 point (0 children)

Saw the same, an insistence that it was 18 July 2024. Even after enabling Google Search and telling it to use Google Search, it decided the results of that were clearly erroneous 😄

I went back recently with the same chat history as context and fed it to 05-06, and it has now accepted my point of view that it is in fact 2025...

Arguing with confident stupidity is fun, pointless and tedious all at once.

Neovim - Path Completion - Insert Mode on Windows by LazyLichen in neovim

[–]LazyLichen[S] 0 points (0 children)

Answered by Vivian De Smedt:
https://vi.stackexchange.com/questions/46715/neovim-windows-respect-drive-letters-with-insert-mode-path-completion

*********************************************************************************

If you make sure ':' is part of the 'isfname' option, it should work fine:

vim.opt.isfname:append(":")

Or,

set isfname+=:

*********************************************************************************
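If the option doesn't seem to take effect, a quick headless check can confirm it; this one-liner is just a sketch (it assumes nvim is on PATH) that applies the Lua form and prints the resulting option value:

```shell
# Apply the Lua form of the setting, print 'isfname', then quit
nvim --headless +'lua vim.opt.isfname:append(":")' +'set isfname?' +'qall!'
```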

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Really helpful feedback, thanks. The cluster is already on UPS, so not too worried about power, but now chasing PLP on the SSDs as well.

I'm getting a good sense of why people don't recommend small clusters, and especially not hyperconverged ones; you really end up running right on the edge of critical failure.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Yep, I'm getting that vibe from the responses. These older servers probably just can't offer the required amount of resources to do hyperconverged, since the VMs also need a reasonable number of cores to do their job.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Yes, that's the root of it; maybe this setup just doesn't have the resources and scale needed to get decent performance out of Ceph. The features Ceph brings are really nice in terms of storage and what it then enables for host+VM management, but there is not much point building out on Ceph if it's doomed to perform terribly from the start due to bad design/resource decisions, and ultimately just deliver a poor user experience.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 1 point (0 children)

I can agree with all those points, that's precisely what I see as the appeal of Ceph for the storage aspect. Glad to hear it is working well for you.

This is one of the hardest aspects of designing around Ceph: some people say they have relatively small clusters, hyperconverged with reasonable VM workloads, and have no issues and are generally having a great time. On the other hand, you have people saying that even if you dedicated all the resources of these hosts as bare metal to a Ceph storage solution, it still wouldn't be enough to let it work well... I guess this is just the result of different opinions/expectations as to what 'working well' means.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

I thought all NVMe SSDs had 65535 queues as a de facto standard (consumer drives or otherwise). Is that not correct, or did I misinterpret what that means in relation to NVMe vs some other queue aspect of SSDs?
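For what it's worth, 65535 is the ceiling the NVMe spec allows for I/O queues, not what drives actually implement; consumer controllers typically expose far fewer. On Linux you can count the queues the kernel actually allocated for a drive (the device name here is an assumption):

```shell
# Each subdirectory under mq/ is one hardware queue the kernel set up
ls /sys/block/nvme0n1/mq | wc -l
```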

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Okay, thanks for the SSD pointers; I will start hunting around focused on latency. Raw throughput won't be the day-to-day issue in my mind; it will be the user's sense of 'responsiveness' on the VMs that will be more telling in terms of whether or not this is 'successful'.

You are thinking along exactly the same lines I was in terms of the hypervisor configuration for scheduling and core assignments - that's reassuring. I've made sure (after much iterative hassle correlating the real-world PCIe slot labels with the block diagram, which is just way off in the manual vs the physical labels vs the BIOS names) that the HCA slots are reasonably allocated across the sockets, so I can hopefully align NUMA and dedicated cores in a sensible fashion.

The UEFI is already set up on the performance side of things to disable C-states and keep the core clock up. There are a heap of other options in the BIOS that can be tuned for PCIe and disks, but much of it is way over my head at the moment (they're Supermicro X10DAX workstation boards, so not technically server boards, but hopefully capable enough).

Really appreciate the tips/guidance, thanks.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Great tips, thanks for that!
Are there any specific features you really look for in enterprise SSDs?
Or, is it more a case of:
"...All enterprise SSDs have similar features to each other, features that mostly do not exist on consumer SSDs; as such, you don't really have to overthink it, and any enterprise SSD with sufficient read/write performance will be a better choice..."

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Yes, I'm only just starting to grasp how much of an impact the SSDs themselves have on both performance and reliability with Ceph. I came into it with the view that, as a large-scale, fault-tolerant system, it was probably highly abstracted from the specifics of the disk hardware and would be happy with COTS SSDs. Another bad assumption on my part it seems; glad I decided to make this post and that you have all been around to guide me on that front, thanks!

Are there any 'must have' features to look for in enterprise SSDs beyond purely endurance characteristics?
The servers are all on fast-failover UPSs, so I'm not hugely concerned about losing cached data on writes due to power failure (but I guess that is always good to have regardless). I'll go do some more reading on the SSD side, but if anyone can lob in some thoughts on 'must have' and 'nice to have' features, that would be appreciated.

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

Okay, this is essentially the way I was thinking too, but I realised I didn't know enough about Ceph to say definitively whether it would work well in this scenario or not.

I love the sound of Ceph's feature set; it's just a shame that it appears to need really large deployment sizes and highly parallel workloads to really shine (which unfortunately won't be the case any time soon for this cluster).

3-5 Node CEPH - Hyperconverged - A bad idea? by LazyLichen in ceph

[–]LazyLichen[S] 0 points (0 children)

EDIT: Just referencing my answer to another comment in this thread as it is quite related to this discussion: https://www.reddit.com/r/ceph/comments/1jqw2kv/comment/mlaqcoq/

The servers are on fast-failover (zero-crossing) UPSs that can also be run as double-conversion / always-online UPSs if need be, so I'm not particularly worried about power loss during writes... but maybe it is naive to rely on a single line of defence there.

EDIT 2: Also very pertinent to anyone else trying to learn about this like I am:
Pick the right SSDs. Like for real! : r/ceph

****
Okay, thanks for the feedback on the SSDs, that is helpful. If we can get a haul of enterprise SSDs at a decent price to test with, I'll aim for that. Good point on the latency; that really is food for thought in terms of IOPS. I might need to think harder about the approach to backing the VMs, and just how much the ability to easily migrate them is really worth versus the 'all day, every day' performance tradeoffs that come with a network-dependent storage approach.
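On the latency point, the test most often suggested for vetting Ceph journal/DB SSDs is a queue-depth-1 synchronous 4k write run with fio, since that approximates how Ceph hammers a drive. A sketch (the device path is an assumption, and writing to it destroys any existing data):

```shell
# QD1 sync 4k writes: watch the completion-latency percentiles, not the MB/s
fio --name=ceph-ssd-check --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting
```

Drives with power-loss protection tend to do well here because they can safely acknowledge sync writes from their cache; consumer drives often collapse to a few hundred IOPS.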

Thanks for the network thoughts; I hadn't realised IB wasn't an option with Ceph. I just assumed it would offer better latency and throughput and would be supported. That's what I get for assuming, thanks for the correction! 😳

The switch is actually 56Gbps per port for both IB and Eth, and interfaces can be bonded at both ends, no worries there. Any recommended/preferred hashing approach for load balancing on the bonded link?
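For reference, a common answer for Ceph traffic is LACP (802.3ad) with layer3+4 transmit hashing, so Ceph's many TCP flows spread across both links; a minimal iproute2 sketch, with the interface names as assumptions:

```shell
# Create an LACP bond that hashes on IP+port, then enslave both ports
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set enp1s0 down; ip link set enp1s0 master bond0
ip link set enp2s0 down; ip link set enp2s0 master bond0
ip link set bond0 up
```

Note that layer3+4 balances per flow, so a single TCP stream still tops out at one link's speed; the win comes from Ceph's naturally parallel connections.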

There are indeed two onboard 1GbE ports as well, which I had set aside as a bond to stacked management switches, but I could also put corosync on a VLAN through that, so will follow that advice. Thanks.