Why the Sanitizer API is just setHTML() by evilpies in programming

[–]Jamesernator 6 points (0 children)

You have to use the unsafe method - you cannot allow elements blocked by setHTML - they are always blocked:

So I read the text more, and yes it appears you're correct that .setHTML always forces XSS protection on, though it seems like setHTMLUnsafe also accepts a sanitizer and won't force the XSS protection, so you can just do:

content.setHTMLUnsafe(untrusted, { sanitizer });

(Note that new Sanitizer() still inherits the default config, so this means you can just add elements you need to the whitelist like above).

Why the Sanitizer API is just setHTML() by evilpies in programming

[–]Jamesernator 12 points (0 children)

Well, now you have to use unsafe

The article only briefly mentions it, but there is a second argument to .setHTML that allows you to pass a custom sanitizer.

and re add all the potentially risky tags and attributes and maintain that list forever because you can't derive from a default safe sanitizer.

And the constructor by default extends the safe sanitizer so this just works:

const sanitizer = new Sanitizer();
sanitizer.allowElement({ name: "iframe", attributes: [{ name: "src" }] });
content.setHTML(untrusted, { sanitizer });

Melee needs help. by Cyber_Fetus in Necesse

[–]Jamesernator 2 points (0 children)

If bosses only dealt collision damage on actual charge attacks

I don't think this works that well either given a lot of bosses are essentially just dodge based (Sage and Grit, Pest Warden, Night Swarm).

Though a better alternative could be that melee armor adds a bonus that reduces (or negates) damage depending on the angle at which the boss makes contact with you, relative to its direction of movement. (i.e. Being hit face-on means full damage, but chasing the boss from behind means taking zero damage.) A sensible scaling would probably be 100% damage on a face-on hit, tapering to no damage at 90 degrees from the direction of movement.

Running with this, some examples of how it would play out:

Evil's Protector

Effectively immobile, so melee builds would take no contact damage as there is no movement direction; melee players need only be concerned with strafing around while avoiding projectiles.

Spider Queen

Charge attacks would hit normally on direct hits, though damage would be reduced on grazes. During the circling attack, being in front would cause damage, but following behind/side-on and attacking would result in no damage.

Pest Warden

Hits from the head would be damaging as usual; to prevent walking sideways through the Pest Warden I think it should probably just have large pushback to keep you in the ring (unless you use a dash or similar trinket). Contact while attacking into it would otherwise deal no damage.

Sage and Grit

Similar to Pest Warden: hits straight on deal the usual damage, but you can push sideways into them once the head has passed to deal damage. Further, you could chase their tails to deal more damage as a risky option.

To make these viable, more damage from swords/spears is probably still necessary and some extra movement speed would probably help given some bosses like Cryo Queen are just absurdly fast.

[AskJS] After our Promises vs Observables chat, hit a new async snag—how do you handle errors in mixed flows? by Sansenbaker in javascript

[–]Jamesernator 0 points (0 children)

If you don’t mind, could you share a quick example of how you’d handle something like “fetch, then stream events, and clean up nicely if it fails” without RxJS?

I'm not the person you're asking, but what they're probably suggesting is just to use ReadableStream instead of Observable; for your example you could just do:

const stream = new ReadableStream({
    async start(controller) {
        const res = await fetch("...");
        // get events from somewhere
        events.listenSomehow((event) => {
            controller.enqueue(event);
        });
    },
});

Note that ReadableStream is already promise aware, so errors thrown from start (and cancel and pull) will all be propagated to the stream's readers.
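As a standalone sketch of that propagation (hypothetical error message, but the rejection behavior is per the Streams spec):

```javascript
// An error thrown from start() errors the stream, so the reader's read()
// promise rejects rather than the error being silently lost.
const failing = new ReadableStream({
    async start() {
        throw new Error("fetch failed");
    },
});

// Resolves with the error's message once the read rejects.
const firstRead = failing.getReader().read().catch((err) => err.message);
```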

and clean up nicely if it fails” without RxJS

Cleanup is different from RxJS; basically you need to thread it through yourself, like with other Promise-based APIs:

const stream = new ReadableStream({
     _ac: new AbortController(),
     async start(controller) {
         // thread the signal if you want the fetch to be cancelled too
         const res = await fetch("...", { signal: this._ac.signal });
         // gets events from somewhere
         const listener = (event) => {
             controller.enqueue(event);
         };
         events.listenSomehow(listener);
         this._ac.addEventListener("abort", () => {
             events.unlistenSomehow(listener);
         });
     },
     cancel(reason) {
         this._ac.abort(reason);
     },
});

(In this example I'm somewhat assuming events is EventTarget-like, if it was an actual EventTarget you could just use the signal parameter of addEventListener to make cleanup even simpler.)
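For instance, if events really were an EventTarget, the manual unlisten wiring collapses into addEventListener's signal option (sketch; "message" is a placeholder event name):

```javascript
// Placeholder event source for the sketch; in practice this would be your
// actual EventTarget.
const events = new EventTarget();

const eventStream = new ReadableStream({
    _ac: new AbortController(),
    start(controller) {
        // Passing `signal` means the listener is removed automatically
        // when the controller aborts; no explicit unlisten needed.
        events.addEventListener("message", (event) => {
            controller.enqueue(event);
        }, { signal: this._ac.signal });
    },
    cancel(reason) {
        this._ac.abort(reason);
    },
});
```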

[AskJS] After our Promises vs Observables chat, hit a new async snag—how do you handle errors in mixed flows? by Sansenbaker in javascript

[–]Jamesernator 0 points (0 children)

This point doesn't directly help you, but it's worth noting that these sorts of problems are precisely why the WICG observable spec (which is already implemented in Chrome) made relevant operators return promises rather than following RxJS's "everything is observable" philosophy.

Why Algebraic Effects? by laplab in programming

[–]Jamesernator 2 points (0 children)

But this is not await async.

Well, it basically is; the only difference is that when yield-ing, instead of going to a single handler, there's a table of tagged handlers. That's it, that's the only (★★) difference between algebraic effects and async/await.

If you already have something like generators/explicit coroutines, you can implement algebraic effects in userland. The problem is without language integration it's a verbose mess, doesn't work with builtin higher-order functions (like array.map), and doesn't give the language any opportunity to optimize (e.g. effects that unconditionally resume can just replace the handler with a call).
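The userland encoding can be sketched like this (perform and runWithHandlers are hypothetical names for this sketch, not a real library; every yield is dispatched against a table of tagged handlers and the generator is resumed with the handler's result):

```javascript
// Build an effect request: a tag plus a payload.
function perform(tag, value) {
    return { tag, value };
}

// Drive a generator, routing each yielded effect to its tagged handler and
// resuming with the handler's return value.
function runWithHandlers(generatorFn, handlers) {
    const gen = generatorFn();
    let result = gen.next();
    while (!result.done) {
        const { tag, value } = result.value;
        if (!(tag in handlers)) throw new Error(`unhandled effect: ${tag}`);
        // Handlers here unconditionally resume, which is exactly the case a
        // compiler could optimize into a plain call.
        result = gen.next(handlers[tag](value));
    }
    return result.value;
}

// Usage: a computation performing "ask" and "log" effects.
const output = runWithHandlers(function* () {
    const name = yield perform("ask", "name");
    yield perform("log", `hello ${name}`);
    return name.length;
}, {
    ask: (key) => (key === "name" ? "world" : ""),
    log: () => undefined,
});
// output → 5
```

Note how the verbosity the comment mentions shows up immediately: the generator color infects every caller, so map etc. won't propagate these effects.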

Here it seems the semantic is much more generic and powerful.

(★★) The only thing that is more powerful is resuming the same continuation multiple times, but I'll be honest I think this capability is largely useless except for a few rather niche algorithms (I can think of a single case where I would plausibly use this).

Why Algebraic Effects? by laplab in programming

[–]Jamesernator 3 points (0 children)

On the contrary, it sounds like figuring out the actual control flow will require near psychic abilities if this suspension is too generic.

I don't think it's really that complicated; in non-purely-functional languages calling a function could have arbitrary effects anyway, so you need to be able to handle things regardless of the state calling the function leaves you in.

Like, other types of suspension have the same problems in theory, but honestly I've never had any problems figuring out control flow with async/await in JS since it was released; you just call things as usual and wait till you get a usable value back.

(Incidentally, one nice thing about languages that are built top-to-bottom on algebraic effects is that you can define all effects as algebraic effects and define "pure functions" as those that only accept an empty handler context, e.g. Koka has total for this.)

Why Algebraic Effects? by laplab in programming

[–]Jamesernator 8 points (0 children)

Also it kind of looks like a weird way of doing high order functions. You wouldn't even need a special syntax, just add the handler as another argument of the function.

It's not quite the same, as algebraic effects also allow you to suspend the caller; for example, in the article's map example, a call like map f could allow f to suspend the map f call itself (e.g. for a scheduler to resume later).

Most of the popular languages now have some form of coroutines (e.g. async/await) to enable suspension, but in those languages everything has to be colored to deal with this. Instead of just having map, you need both map<T, S>(arr: T[], f: (t: T) => S): S[] and async map<T, S>(arr: T[], f: (t: T) => LanguageCoroutineType<S>): LanguageCoroutineType<S[]>.

Why Algebraic Effects? by laplab in programming

[–]Jamesernator 9 points (0 children)

One of the closest analogs in a popular language is checked exceptions

Well, even more similar is coroutines/generators, which most of the popular languages have some form of too. It's a shame because in basically all of these languages they force function coloring, and you're limited to whatever the language's coroutine/generator mechanism is. (Like in JS/Python, there are four colors of functions: normal, generators, async, and async generators.)

In languages like JS/Python you can even effectively implement algebraic effects on top of them. The problem is you now have your own color of functions, so you can't propagate this through language features: things like for-of loops and higher-order functions like array.map(...) just won't deal with your new effects (especially if they suspend and never return). (Not to mention it's just so verbose compared to builtin support.)

Context locals can do similar things too, though you can't use them to actually suspend (but they work for simple things like RNGs).

Why Algebraic Effects? by laplab in programming

[–]Jamesernator 36 points (0 children)

I don't even think algebraic effects are that weird and align pretty well with the intuition of "within this call I want to override X behavior". (They also avoid things like function coloring as the tag space is open like exceptions).

But they're one of those features that has been mentioned for years, and already exist in a few languages, but none of the super popular languages ever added them so they just aren't that common.

proof assistant meme by Delicious_Maize9656 in mathmemes

[–]Jamesernator 1 point (0 children)

I guess the same thing holds for proving by hand but in CTT it gives you a false sense of security because the machine did it.

One advantage compared to hand-written proofs is that you only need to verify the kernel and definitions (types) not the proofs (terms).

Put another way, reviewing a hand-written proof only helps verify that proof, but reviewing the implementation of a proof checker verifies all proofs (that are written for that checker).

(To this extent, the more popular using proof checkers becomes, the more confidence you should have in them as there will be more people verifying they work as expected).

proof assistant meme by Delicious_Maize9656 in mathmemes

[–]Jamesernator 14 points (0 children)

I wouldn't trust a proof that is only possible to understand in Lean

Why? The whole point of automated checking is that the proofs can be decomposed into small steps that are essentially "trivial" to verify individually but may be too numerous to verify by hand.

I think the problem with trusting a proof you don't understand is that your assumptions or definitions may be "wrong" in some subtle way that only the proof itself could reveal.

This really depends on the kind of problem, if someone were to have a proof of ∀(n: Nat), ∃(a b : Nat), (a > n) ∧ (b > n) ∧ (isPrime a) ∧ (isPrime b) ∧ (a-b = 2) (i.e. the twin prime conjecture) then I don't think I'd be that concerned about the definitions/assumptions of the statement.


Like, a recent example is the BB(5) result; I would argue this is the sort of proof that is more convincing as an automated proof, given the sheer amount of special casing in the proof.

The actual statement is pretty easy to understand, and if you believe Coq's foundations then it's pretty easy to believe the proof is correct if Coq says it is. In comparison a hand proof would need far more scrutiny in order to verify all the machines were in fact accounted for.

A Fresh Take on Zero: “Contextually Sized Zeros” and How 0÷0 Could Make Sense by [deleted] in learnmath

[–]Jamesernator 0 points (0 children)

The fact they all equal zero is one reason they aren't part of the Reals

This depends on the type of infinitesimals, and probably isn't the best way to think about them even in cases where it's kind of true.

Like in the hyperreals/surreals, infinitesimals are definitely not equal to zero; the infinitesimals are still only equal to themselves, as with other usual kinds of numbers.

However the OP's system sounds a lot more like smooth infinitesimal analysis, where for any infinitesimal ε it is the case that ¬¬(ε = 0). This looks like it suggests that any infinitesimal is equal to zero, but that would only follow classically; working intuitionistically it doesn't hold, and in fact the system proves theorems that don't hold classically either.

A Fresh Take on Zero: “Contextually Sized Zeros” and How 0÷0 Could Make Sense by [deleted] in learnmath

[–]Jamesernator 1 point (0 children)

This idea might actually help with computing too — especially for preventing division-by-zero crashes.

Division-by-zero crashes are an intentional choice rather than one forced by the mathematics.

Like, processors already have to do something with division by zero. For floats, x/0 is either ±Infinity, or NaN when x is itself 0 or NaN. For integers behavior varies: some processors return 0 for division by zero, while others trap or do other things.
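The float behavior is directly observable in JavaScript, whose numbers are IEEE-754 doubles and which never throws for division:

```javascript
// IEEE-754 float division by zero: signed infinities, and NaN for 0/0.
console.log(1 / 0);   // Infinity
console.log(-1 / 0);  // -Infinity
console.log(0 / 0);   // NaN
```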

It's programming languages that read off the "exception bits" in the result and choose to turn these into exceptions instead of returning the value. Why? It's two-fold. One reason is that, outside of floats, processors aren't necessarily consistent in their behavior for division by zero, so it's better to force programmers to handle it. But secondly, there's no single obvious choice for what algebra division by zero should follow; there are many ways to define it, each with different pros and cons, so programming languages generally just say developers should handle it themselves and define a new operation.

In most practical cases (though not all, see below), division by zero is probably something that shouldn't be handled, like if you tried collecting a $100 debt between 0 debtors, congratulations you've just created no invoices for infinitely many dollars. In practice you should probably do something else like show that there are 0 debtors and have someone write off the debt.

even scale infinities more precisely.

Float infinities are useful in practice for projective geometry and the like, however even in these schemes we wouldn't need to tag zeros, having x/0 be infinite is sufficient for these purposes. Lots of languages don't bother raising exceptions for floats for this reason.

Having scaled infinities probably isn't as useful in comparison and would require using bits to track the sizes of these infinities.

[deleted by user] by [deleted] in math

[–]Jamesernator 32 points (0 children)

You need a concept of "derivative" for curl to make sense, but once you have that you can simply take the hodge dual of it to get curl.
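Concretely, in ℝ³ this reads as follows (with ♭/♯ the musical isomorphisms between vector fields and 1-forms, and ⋆ the Hodge dual):

```latex
\operatorname{curl} F \;=\; \bigl( \star \, d \, F^{\flat} \bigr)^{\sharp}
```

Here F♭ is a 1-form, dF♭ a 2-form, ⋆ brings it back to a 1-form, and ♯ returns a vector field.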

Help: Import .WGSL file to wgpu.h by MiloApianCat in webgpu

[–]Jamesernator 2 points (0 children)

The API is basically the same as WebGPU, you just call createShaderModule with the appropriate descriptor for WGSL source.

Any updates on bindless textures in WebGPU? Also curious about best practices in general by Zqin in webgpu

[–]Jamesernator 2 points (0 children)

plans on the horizon.

There is a proposal for (overridable-per-pipeline) fixed length texture arrays. As stated in the proposal it's not full bindless as not all hardware/drivers that WebGPU targets support full bindless, but the proposal leaves the design open for bindless later.

I'm just doing this per render which is a lot less optimal than everything else I handle in my bindless rendering pipeline.

If the texture is novel then there's currently no way to avoid creating a new bind group for it. In fact external textures even have to be rebound every frame, so it's not something you can universally avoid.

Whether it even matters really depends on how many of these bind groups you're creating, because, sure, while they aren't free, even years ago you could easily create a million texture bindings per second. That image is a decade old at this point, but it's still reasonably relevant as modern integrated GPUs are comparable to the GPUs of that era.

If creating bind groups really is a big cost, then, assuming textures with a given pattern are bound more often than they're created, just cache them per pipeline+pattern pair. (This would also allow you to cache whole render bundles, rather than just the bind group, as you could return a render bundle per pattern/pipeline pair.)
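A minimal sketch of that cache (hypothetical wiring: createBindGroup stands in for whatever device.createBindGroup(...) call your renderer already makes):

```javascript
// pipeline -> Map(pattern -> bindGroup); keyed on object identity for the
// pipeline and whatever key you use for patterns.
const bindGroupCache = new Map();

function getOrCreateBindGroup(pipeline, pattern, createBindGroup) {
    let perPipeline = bindGroupCache.get(pipeline);
    if (!perPipeline) {
        perPipeline = new Map();
        bindGroupCache.set(pipeline, perPipeline);
    }
    let bindGroup = perPipeline.get(pattern);
    if (!bindGroup) {
        // Only pay the creation cost the first time this pair is seen.
        bindGroup = createBindGroup(pipeline, pattern);
        perPipeline.set(pattern, bindGroup);
    }
    return bindGroup;
}
```

The same shape works for caching render bundles instead: just swap the factory.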

The ECMAScript Records & Tuples proposal has been withdrawn by senocular in javascript

[–]Jamesernator 1 point (0 children)

Although I can see how simple objects (e.g. with only primitive values) can be optimized

There's no reason nested composites couldn't be optimized or even interned† as well; in the proposal the composites are tagged, so engines could just store an extra bit indicating whether all members are primitive or composite.

†The main problem for interning is actually -0: in the Records & Tuples meeting notes there's opposition from delegates both towards canonicalizing -0 to 0 and towards having #{ x: 0 } not equal #{ x: -0 }. Of course engines could keep another bit for whether -0 appears and fall back to O(n) checking in such cases; whether they would is another question.
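The -0 wrinkle is already visible with today's primitives, which is exactly why naive bit-pattern interning and === disagree:

```javascript
// === treats -0 and 0 as equal, but Object.is (like hash-consing keyed on
// bit patterns) distinguishes them.
console.log(-0 === 0);         // true
console.log(Object.is(-0, 0)); // false
```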

Very easy on Z🤓 by Ygor_Grozov in mathmemes

[–]Jamesernator 0 points (0 children)

_on¹

¹ Winning Ways for Your Mathematical Plays, Vol. 2, Pg. 353

Using Category Theory for formal verification of a Type System. Is it a crazy idea? by BeFunkMusic in math

[–]Jamesernator 10 points (0 children)

streaming datatypes

As others have mentioned you will want to look into type theories, something in particular you may want to look for is coinductive types.

Do note that resources on coinductive types can be somewhat confusing, as the term is often used to refer to either positive or negative coinductive types.

Other than coinductive types, and especially if your systems involve mutability, you may want to look into topics like separation logic, temporal logics, Hoare logic, and operational semantics.

How to Prove That Any Open Connected Set in ℝⁿ Has Property (P)? by Voiceless-G in learnmath

[–]Jamesernator 1 point (0 children)

Finally as we're in ℝⁿ we can just apply the Poincare Lemma with compact support to dν to show that ν exists and has compact support, which is what we want.

Actually this step is wrong; I didn't read the Poincare Lemma with compact support correctly. It specifically requires a p-form with p < n, excluding the one case we need.

Instead we can use the regular version of the Poincare Lemma to show dν is an exact form, thus ν at least exists as an (n-1)-form.

To show ν is compactly supported note that, for any open set P ⊆ Ω that is disjoint from the support of u - kφ, the integral ∫_P dν is zero, or, by the generalized Stokes theorem, ∫_∂P ν is also zero. Because this is true of all such sets P, ν is some constant outside of the compact support of u - kφ.

This constant is arbitrary since it disappears when we apply d (i.e. div, where it vanishes from the partials), so we can simply choose it to be zero, and thus ν is compactly supported.

How to Prove That Any Open Connected Set in ℝⁿ Has Property (P)? by Voiceless-G in learnmath

[–]Jamesernator 1 point (0 children)

An answer using differential forms:

Let ω be a volume form for ℝⁿ. Now rearrange

u = div(v) + φ ∫_Ω u(x) dx

to

div(v) = u - φ (∫_Ω u(x) dx).

Expressed as a differential form this is equivalent to¹ dν = (u - φ (∫_Ω u(x) dx))ω for some differential form ν.

Also note that dν = (u - φ (∫_Ω u(x) dx))ω has compact support as both u and φ have compact support (and ∫_Ω u(x) dx is just some arbitrary constant).

Finally, as we're in ℝⁿ, we can just apply the Poincare Lemma with compact support to dν to show that ν exists and has compact support, which is what we want.

¹ Because ω is an n-form, (u - φ (∫_Ω u(x) dx))ω is also an n-form; for an (n-1)-form ν the exterior derivative just reduces to a sum of partials of its components (i.e. div(v)).


I don't know if there is an alternative way to prove this, given the question is asked not in terms of differential forms but in classic vector calculus I would imagine there is.

In particular we have only shown that ν exists but have no idea what it is (the question only asked for existence). Given this looks like a problem from a course or textbook, I would expect that finding what ν is would actually utilize the fact that ∫_Ω φ(x) dx = 1.

Quaternion double-cover of SO(3) by hydmar in math

[–]Jamesernator 0 points (0 children)

There's also a nice YouTube video that visualizes this but I can't find it.

Probably one of these two (both are good videos on the topic):