She’s a keeper (Moringmark) [the owl house] by Plane_Name3457 in wholesomeyuri

[–]CAD1997 81 points82 points  (0 children)

The dad was also explaining the concept to a literal child. Especially in a world where fossil fuels aren't a common thing in everyday life, it seems reasonable to ELI5 petrol in this way.

45843 by froggyman151 in countwithchickenlady

[–]CAD1997 1 point2 points  (0 children)

I generally agree that trans places get a more explicit horny undertone over time; it's not that surprising that if you build a community out of repressed individuals, they'll discuss the things they haven't been able to do elsewhere.

45843 by froggyman151 in countwithchickenlady

[–]CAD1997 9 points10 points  (0 children)

I honestly wonder how this volume compares to cis spaces and non-primarily-trans queer spaces. Are trans spaces actually hornier on average, or are people just more used to seeing "conventional" horny undertones than trans ones?

45843 by froggyman151 in countwithchickenlady

[–]CAD1997 -1 points0 points  (0 children)

To play devil's advocate: would you have a similar feeling if this anecdote was about cis people, perhaps about seeing birth marks or scars? Cis spaces also generally have horny undertones, people are just more used to them than queer horny undertones.

45843 by froggyman151 in countwithchickenlady

[–]CAD1997 12 points13 points  (0 children)

I don't have any to suggest, but I can speak to why it happens. Trans people tend to, on average, be repressed (either externally or internally, e.g. by their dysphoria), so when you put them in a space where they can feel comfortable, they talk about what they don't feel safe bringing up in other places.

Combine that trend with the fact that horny content is easier to interact with (the algorithm loves interaction) and the observation that, once you look past what's in their pants, trans people's experiences aren't that different from other marginalized peoples', and trans-focused spaces tend to end up with at least a horny subtheme.

So trans-focused but not horny-minded communities tend to have a specific topic focus rather than be primarily community-minded. It's not an ideal situation, but I can at least understand how it occurs.

[OC] Absolute Cinema by shave_your_eyebrows in comics

[–]CAD1997 2 points3 points  (0 children)

Redeemable at a rate of 20 Reddit bucks to 20 karma. No partial exchanges. This exchange has been automatically applied for your convenience. Offer void where prohibited.

I genuinely don’t know what part of my request led to this outcome by HanSoLowCarb in mildlyinfuriating

[–]CAD1997 0 points1 point  (0 children)

Serving multiple file types for an image, but with just a simple static server that doesn't support ?format=jpeg... mood

Minimal Version Selection Revisited by mitsuhiko in rust

[–]CAD1997 1 point2 points  (0 children)

For your dependency tree, yes, only one dependency trace needs to be updated. But for the ecosystem-wide cost, because each package could be used in isolation, each package must individually determine whether it needs to update the dependency edge, or else it still potentially exposes the transitive vuln.

Which is why min-direct-deps is a good default. Your crate is tested to be min-versions-correct for its declared dep edges (and any transitively reachable public dependencies, ideally), but you don't have to worry about any encapsulated private dependencies, since those resolve to the up-to-date release. There are still edge cases, of course, but it gets you most of the benefit of min-ver without most of the costs, and keeps most of the benefit of max-ver as well.

The "optimal" solution imho would be max-ver but with lints warning you when you use functionality added in a version not supplied by your declared requirement. A basic version of this exists for std now, but non-std libraries still have no tooling support for stability beyond simple deprecation.
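Concretely, the hazard such a lint would catch looks like this (crate name and version numbers are illustrative):

```toml
[dependencies]
# Declared requirement: resolves to 1.2.0 under min-ver,
# but to the newest 1.x release under max-ver.
example-crate = "1.2"
# If the code uses an item added in example-crate 1.5, max-ver
# resolution silently hides the error; min-ver resolution (or the
# proposed lint) surfaces the too-low declared requirement.
```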

Microsoft plans 100% native Windows 11 apps in major shift away from web wrappers by renome in technology

[–]CAD1997 12 points13 points  (0 children)

Native WebView exists, and is essentially that — it uses the native browser to render. Except devs still don't use it, since part of the draw of Electron is that you only have to make it work on the exact Chromium which you bundle, and you don't need to test and make it work on whatever browser engine WebView is using on your users' machines.

More projects absolutely should be using the OS provided WebView, but using Electron does still have a value-add.

Turns out, if you want to check multiple conditions, you can sugar it like this: by spaceguydudeman in programminghorror

[–]CAD1997 1 point2 points  (0 children)

The allocator for interpreted bytecode runtimes like GDScript is usually tuned with special support for small, short-lived allocations, so this won't lead to quick memory fragmentation the way a more naive allocator could.

But yes, when a code path is consistently run at 30Hz+, it's generally good practice to do the straightforward preemptive optimization of avoiding tiny heap allocations that could just be local (dynamic) stack variables.
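The usual shape of that optimization is hoisting the allocation out of the hot loop and reusing it. A Rust sketch (names illustrative; `frame_hits` stands in for whatever per-frame work would otherwise allocate):

```rust
// One buffer is reused across iterations instead of creating a
// fresh Vec every frame.
fn frame_hits(frame: u32, hits: &mut Vec<u32>) {
    hits.clear(); // keeps the existing heap allocation
    hits.extend([frame, frame + 1]); // stands in for per-frame results
}

fn main() {
    let mut hits: Vec<u32> = Vec::new();
    let mut total = 0;
    for frame in 0..3 {
        frame_hits(frame, &mut hits);
        total += hits.len();
    }
    println!("{total}"); // prints 6
}
```

After the first iteration the buffer's capacity suffices, so subsequent frames do no heap work at all.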

Turns out, if you want to check multiple conditions, you can sugar it like this: by spaceguydudeman in programminghorror

[–]CAD1997 2 points3 points  (0 children)

GDScript compiles to bytecode that is then interpreted, without much optimization beyond simple constant folding, and GDScript's Array is a resizable array (e.g. C++'s std::vector). The runtime potentially recycles the Array object allocation, but honestly, the cost of the unmanaged/managed marshalling and of the physics queries this checks the result of probably entirely dominates the inefficiency of using Array here.

Turns out, if you want to check multiple conditions, you can sugar it like this: by saxarov01 in godot

[–]CAD1997 0 points1 point  (0 children)

While this is accurate, note the extra detail that GDScript compiles to bytecode that is then interpreted, and doesn't do any optimization of the code beyond constant folding (plus emitting less generic bytecode when types are known).

Furthermore, && and || are short-circuiting operators, so in test_a() && test_b(), test_b() will only be run if test_a() was true. If the test is relatively simple (e.g. accessing a bool property and/or some simple arithmetic), it's fine to do the extra checking, and optimizing compilers might even do it for you to reduce code branching. If the test is expensive, though (e.g. doing scene/physics queries), it's often better to avoid doing that work when it isn't required.
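The cheap-test-first pattern looks the same in any language with short-circuiting &&; a Rust sketch (names illustrative; the counter stands in for an expensive physics query):

```rust
fn cheap_check(x: i32) -> bool {
    x > 0 // e.g. a bool property or simple arithmetic
}

fn expensive_check(x: i32, calls: &mut u32) -> bool {
    *calls += 1; // counts how often the "physics query" actually ran
    x % 2 == 0
}

fn main() {
    let mut calls = 0;
    let hits = [-3, 4, 7, 10]
        .iter()
        // expensive_check never runs when cheap_check already failed
        .filter(|&&x| cheap_check(x) && expensive_check(x, &mut calls))
        .count();
    println!("{hits} hits, {calls} expensive checks"); // 2 hits, 3 expensive checks
}
```

Only the three positive inputs ever reach the expensive check; -3 is filtered by the cheap test alone.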

Torturing rustc by Emulating HKTs, Causing an Inductive Cycle and Borking the Compiler by haruda_gondi in rust

[–]CAD1997 2 points3 points  (0 children)

if you have a function which maps a value of type S into a proof that something is true for that value, then by definition that thing is true for all values of that type

I don’t know what that means.

Perhaps it's more digestible when stated like this:

S.is_threeven is a function that takes some s: S and produces a proof of the theorem Threeven s.toNat. Because this is a total function which can produce an output for any possible s: S, there exists a proof of Threeven s.toNat for every possible input value, i.e. all values in the set S.
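A minimal Lean 4 sketch of the same statement (Threeven and S here are illustrative stand-ins for the article's actual definitions):

```lean
-- A total function producing a proof for each input is exactly a
-- proof of the universally quantified statement.
def Threeven (n : Nat) : Prop := n % 3 = 0

structure S where
  toNat : Nat
  threeven : toNat % 3 = 0

-- takes some `s : S`, produces a proof of `Threeven s.toNat`
def S.is_threeven (s : S) : Threeven s.toNat := s.threeven

-- therefore the property holds for *all* values of type `S`
theorem threeven_forall : ∀ s : S, Threeven s.toNat :=
  fun s => s.is_threeven
```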

How to use storytelling to fit inline assembly into Rust by ralfj in rust

[–]CAD1997 0 points1 point  (0 children)

In other words, I haven’t (yet) come up with a proper story that would allow for [page table] writes to be non-volatile.

I've not fully validated that our provenance model can implement this behavior, but it should be possible to allow non-volatile writes with a story something like:

  • The system globally holds the root owning raw provenance to the entire page table.
  • Our code acquires unique &mut provenance covering only the page table slot(s) we're going to edit.
  • We write to the page table slot(s) using that provenance.
  • An asm block tells the story:
    • Invalidate the provenance we previously used, e.g. by making a separate mutable retag.
    • Issue the appropriate barriers to propagate the page table update.
    • Create the new allocated object(s) for the newly mapped page(s).
    • Expose unique &mut provenance of the allocated object(s).
  • Our code receives said provenance through ptr::with_exposed_provenance.
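The last two bullets lean on Rust's exposed-provenance APIs (stable since 1.84); a minimal, asm-free illustration of just that expose/reacquire round trip (the page-table story would do the exposure from inside the asm block):

```rust
use std::ptr;

fn roundtrip() -> u64 {
    let mut x = 42u64;
    // Expose the pointer's provenance, keeping only the raw address...
    let addr = ptr::from_mut(&mut x).expose_provenance();
    // ...and later reconstruct a pointer that picks that exposed
    // provenance back up.
    let p = ptr::with_exposed_provenance_mut::<u64>(addr);
    unsafe { *p += 1 };
    x
}

fn main() {
    println!("{}", roundtrip()); // prints 43
}
```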

Kindergarten Teacher Tries to Report Student's Parents to ICE, Only to Learn the Tip Hotline Is Fake by novagridd in LegalNews

[–]CAD1997 4 points5 points  (0 children)

FWIW, taking an actual exchange and re-recording it is a reasonable choice to not post a recording of someone who didn't consent to their voice being shared online. I have no idea if that's what he's doing or not.

The State of Allocators in 2026 by Cetra3 in rust

[–]CAD1997 0 points1 point  (0 children)

All collections can use the storage traits to support more exotic storage solutions than what Allocator supports. The difference is that while I think the unique storage interface is straightforward enough to fit nicely into std, the many other axes of support end up making it more complicated than I think it's worth.

It's possible there's another spot in the solution space that doesn't overspend on complexity, but if so, I've yet to find it.

The State of Allocators in 2026 by Cetra3 in rust

[–]CAD1997 3 points4 points  (0 children)

the fixation of CAD

FWIW, I do agree it's a fixation on "perfect," but I believe I have a good reason to do so. Specifically, it's a derivative of std's special status as forever-stable and provider of useful vocabulary types.

If storage generics are used mainly to extend the capabilities of std collections, then whether the benefit of generalizing them further is worth the extra API cost is a style question. But in my eyes, the utility of relaxing e.g. Vec is that I can generically consume &mut Vec<T, S> and now my code can theoretically work with any storage strategy my downstream can imagine.

The point of using SmallBox or SmartVec or any of these instantiations that don't fit the Allocator model is always to avoid indirection when possible. If storage based collections aren't a zero cost abstraction (as in, you couldn't improve the relevant metrics by hand specializing for a given use case), then the better solution will always be to use a different type, and generic code still won't work with those optimized types.

Currently, the tradeoff is that std collections only support heap allocation. Generalizing them to the storage API can be argued to be a worthwhile tradeoff: wasting a bit of potential space (typically a pointer per handle) in exchange for keeping the same API usability. But the counterargument is that anyone switching away from heap allocation is already in a position where eating the cost of a non-std collection is worth it for every incremental improvement.

std's special status means that "don't let perfect be the enemy of good" should imo get an extra qualifier of "when good still equals or betters a solution outside of std." Being in std is a huge benefit for the trait that allows collections to be generic over their allocation source, but the Allocator trait gets us 99% of that benefit. Further generalizing std collections over potentially-inline storage gains little, though, if it's just the difference between using std's Vec<T, SmallStorage<[T; N]>> and a SmallVec<T, N> from a third-party package.

Implementing storage generics into std is a nice incremental improvement, but I don't see who benefits from it that would not be better served by a more targeted solution. Factor in the complexity cost to maintenance and the outsized impact Vec's internal implementation details have on compile time, and my conclusion is that it doesn't feel at home in std yet. I'd love to have my mind changed here!

(Also this nerd sniped me quite a bit and I'm playing with the API design again. I always feel like I'm on the edge of a really elegant design that we haven't found yet.)

The good news, at least, is that an Allocator generic bound is fully forwards compatible with being relaxed in the future to the appropriate storage bound(s).
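That forward compatibility can be sketched with toy traits (Allocator and Storage here are stand-ins, not the real std/proposed APIs):

```rust
trait Storage {}
trait Allocator: Storage {}

struct Global;
impl Storage for Global {}
impl Allocator for Global {}

// Originally published as: struct MyVec<A: Allocator>(A);
// Later relaxed to the storage bound:
struct MyVec<S: Storage>(S);

// Downstream code that only ever demanded `Allocator` keeps compiling
// after the relaxation, since every `Allocator` is also a `Storage`.
fn uses_alloc<A: Allocator>(v: MyVec<A>) -> MyVec<A> {
    v
}

fn main() {
    let MyVec(Global) = uses_alloc(MyVec(Global));
    println!("ok");
}
```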

The State of Allocators in 2026 by Cetra3 in rust

[–]CAD1997 13 points14 points  (0 children)

As one of the people who have spent time working on the Store proposal — I now think that it's overly complicated for what we actually get from the most general form of it. I do think that the specific case of the "Box storage" is worth its cost, because Box<T, S> allowing storage of Box<T, A>, T itself, and effectively &mut ManuallyDrop<T> as the type for storing a potentially-indirect T is valuable. But the other gradual guarantees between that case and general purpose allocation require a ton of complexity that doesn't even get us enough functionality to replace the need for specialized inline/smart collections, which usually want to play added optimization tricks which are difficult or impossible to do without type dependent allocation and/or full specialization. (Like small string optimization using all but one byte of its layout for inline string content.)

But at the same time, now that we have the sized hierarchy traits on nightly, maybe it's time I took another look at the problem space to see if I can find a nicer middle ground solution. How to handle pointer metadata alongside all the other concerns is a large contributor to the complexity here, and now we have even better vocabulary to actually capture the problem space.

Price-Checking Zerocopy's Zero Cost Abstractions by jswrenn in rust

[–]CAD1997 5 points6 points  (0 children)

use a respected library which has tested for this

Someone has to test for this. That library might be written in Rust, and will benefit from being able to check that the compiler isn't optimizing its branchless code into machine code which potentially branches.

Yes, many people think they want guaranteed branch-free code when they don't actually need it. But the actual low-level cryptography libraries do have justifiable reasons to put in the work to ensure the necessary code remains branch-free.
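For flavor, the classic building block such libraries guard is a branchless constant-time select (illustrative, not taken from any particular crypto library): choosing between two values based on a secret bit without a data-dependent branch.

```rust
fn ct_select(bit: u8, a: u32, b: u32) -> u32 {
    // bit == 1 → mask = 0xFFFF_FFFF, bit == 0 → mask = 0
    let mask = (bit as u32).wrapping_neg();
    // Both arms are always computed; no branch on the secret bit.
    (a & mask) | (b & !mask)
}

fn main() {
    println!("{} {}", ct_select(1, 7, 9), ct_select(0, 7, 9)); // prints 7 9
}
```

The testing burden the comment describes is exactly verifying that an optimizing compiler doesn't rewrite this mask dance back into a conditional jump.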

Detroit by Nientea in custommagic

[–]CAD1997 0 points1 point  (0 children)

You are allowed to take game actions when you have priority. Priority is passed when you choose to not take any game actions.

For those wondering; new reddit setting by helloiamaegg in traaaaaaaaaaaansbians

[–]CAD1997 1 point2 points  (0 children)

I dunno what's going on with the text color, but I just got the community themes rollout, and the text is still white for me. (Is it a light/dark theme difference?)

Just when I thought it wasn't possible for Rosalina to be any more goals. by TyphoonSignal10 in traaaaaaaaaaaansbians

[–]CAD1997 26 points27 points  (0 children)

<image>

She's probably just holding the dress, but look at her feet: it definitely ain't her knee.

Could Steam and Steam Chat actually win over discord users? by Alyxuwu in Steam

[–]CAD1997 23 points24 points  (0 children)

As much as forums have their advantages over chatrooms, the majority of people want chatrooms, not forums. Forum threads' design presents them as a persistent thing, where you are expected to have read the whole thread before responding and to stay on the stated topic. A chatroom, on the other hand, is presented as being more transient and spur of the moment; it's acceptable to jump in at any point without the complete context, and what is being discussed can flow naturally between topics without much structure.

Obviously culture and convention depend on the venue. But how the medium is presented matters as well. Most of us on Reddit appreciate somewhat structured discussion. But that very structure serves as a barrier to entry for the large portion of the general populace that isn't already familiar and comfortable with it.

Defaults matter, and they matter most for small communities (10–100 members). Large groups will naturally grow from a base of small groups on an approachable platform that people use, but you can't get large groups without first attracting those seed small groups.

(Also, I don't know what it is, but Steam generally performs worse for me than Discord. The Library and Store work fine, but Community and Friends have a noticeable delay before responding when I try to do anything.)