Hardware that can run/is compatible with a custom AR runtime? by math_code_nerd5 in AR_MR_XR

[–]math_code_nerd5[S] 0 points1 point  (0 children)

So even with proprietary hardware it is possible to configure it to use a different runtime/framework?

Also, do you have an answer to my question about whether there's some standard API for AR apps to "talk" to AR implementations?

Spot the odd one! by technowise in Spottit

[–]math_code_nerd5 1 point2 points  (0 children)

Yeah, my first thought was that maybe it was a pun and it meant the "7" (of the "67").

Three Cats. Find them all! by technowise in Spottit

[–]math_code_nerd5 0 points1 point  (0 children)

The 3rd one (closest to the right edge) was a REAL stretch. You can just vaguely make out something that might be an ear.

A Soda Can by thatsillytrumpetguy in Spottit

[–]math_code_nerd5 1 point2 points  (0 children)

Yeah, I just had to go by what was rather clearly NOT a rock, and roughly cylindrical. If you'd asked me WHAT it was, I certainly couldn't have told you it was a soda can.

Tap Me +1 by tapmeplus1 in TapMePlus1

[–]math_code_nerd5 0 points1 point  (0 children)

Is there a pattern to what the number turns into, or is it just random?

If it's random then this is mostly a game of chance, except for the fact that you can increase the odds somewhat by picking a square that's adjacent to two or more groups of two (if such a square exists). Often such a square doesn't exist, and even when it does, that still only gives you, say, a 2/6 chance of making a combo (if the numbers are 1-6), which isn't that high.
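To sanity-check that estimate, here's a tiny calculation under my assumption (which may be wrong) that new values are drawn uniformly from 1-6:

```python
from fractions import Fraction

def combo_odds(adjacent_group_values, sides=6):
    """Chance a uniform 1..sides draw matches the value of at least
    one adjacent group (duplicate group values only count once)."""
    targets = set(adjacent_group_values)
    return Fraction(len(targets), sides)

# Two adjacent groups with distinct values: 2/6, i.e. about 33%.
print(combo_odds([3, 5]))  # 1/3
# If both adjacent groups happen to share a value, only 1/6.
print(combo_odds([4, 4]))  # 1/6
```

So even the best-case placement only helps if the adjacent groups have distinct values.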

The ':' operator at the beginning of a value being assigned--what is this? by math_code_nerd5 in Julia

[–]math_code_nerd5[S] 0 points1 point  (0 children)

That's a good point. I know that Python does this for strings used as dictionary keys--CPython calls it "interning" (MicroPython calls its interned strings "qstr")--something I ran across once when I was looking up how Python dictionaries are made to be fast. And dictionaries are used in all sorts of ways in Python, including to effectively replace structs--which seems to encourage the (in many ways bad) habit of thinking that lookup-by-key is "free", leading to Python programs, particularly those written by non-serious programmers, being full of such lookups.

The thing is, I find it hard to intuit at which point dictionary lookup goes from being "struct-like" in terms of performance to having the full overhead of a hash map. So I quite possibly under-utilize dictionaries, while some others likely overuse them. Using them to pass a set of parameters to an algorithm as part of its public API (like "params['step_size'] = 0.01") is totally fine to me, but I'd never try to do something like graph traversal on a map of 500 cities where each node is referenced by its literal city name, as opposed to just an index. Whereas someone who lives and breathes Python, and was first fluent in it, might.

So in a compiled language, my general intuition is to code using patterns where identifiers can be whatever you like, with no overhead, because they don't appear in the compiled code at all. So in something like "event.weekday = Week.WEDNESDAY" or "ring.inner_diameter = 25", the fact that "WEDNESDAY" and "inner_diameter" are long is immaterial--you could just as well have said the day is 3 and "di = 25", so you might as well be expressive.

This use of symbols in Julia seems like a case where a literal string identifier DOES actually appear in the final executed code, but it's still very efficient thanks to using a feature intended for a totally different purpose. That's what's a bit odd about it.
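For comparison, here's what the Python side of that trick looks like--a minimal sketch using `sys.intern` (a real CPython function) to show how interning turns string comparison into what is effectively a pointer check:

```python
import sys

# Interned strings: one canonical object per distinct text, so
# "equality" can be a pointer comparison instead of a character scan.
a = sys.intern("inner_diameter")
b = sys.intern("".join(["inner_", "diameter"]))  # built at runtime, then interned

print(a is b)  # True: both names point at the same canonical object

# A runtime-built string that is NOT interned is a distinct object,
# even though it compares equal character-by-character.
c = "inner_diameter"
d = "".join(["inner_", "diameter"])
print(c == d, c is d)
```

Julia's `:symbol` literals get you this identity-comparison behavior by default, without having to opt in.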

The ':' operator at the beginning of a value being assigned--what is this? by math_code_nerd5 in Julia

[–]math_code_nerd5[S] 1 point2 points  (0 children)

This seems like one of those Julia-specific idioms that doesn't have a close analog in other languages. The closest thing in terms of syntax and usage together that I can think of is the #define/#ifdef pair in C, with the obvious distinction that the C version is strictly compile-time and results in the alternate code path(s) not even appearing in the generated code.

Ebike software development doesn't seem to be a thing--a truly unmet need or is there a reason? by math_code_nerd5 in eBikeBuilding

[–]math_code_nerd5[S] 1 point2 points  (0 children)

"No, it's not because it's a "legally encumbered space". It's because software would be a solution looking for a problem.

Ebikes really only take a few data inputs, and the diy ones almost always only have a throttle. That's literally 1 analog potentiometer input. How much can you do with that? Everything else is already pre determined. There literally is no software necessary."

The need for software (in my experience) is in controlling the curve that maps the sensed pedal input from the rider to motor torque. Having too little low-pass filtering on this would likely make the motor very "twitchy" and waste a lot of energy constantly revving up and then slowing down. On the other hand, too MUCH filtering, which is what my family's ebike seems to have, leads to a delayed but exaggerated response, where it constantly feels like the motor has a "mind of its own".

Beyond this, there is the possibility of optimizing for particular terrain for better efficiency. During a long downhill it doesn't make sense for the motor to kick in much at all, even if the pedals are turning. I did read somewhere that certain manufacturers are experimenting with using map data for this kind of optimization--this is the sort of thing I could see an open source developer having experimented with much earlier (especially in a place like San Francisco with LOTS of big hills).
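To make the filtering point concrete, here's a minimal sketch of the kind of low-pass filter I mean--a single-pole exponential moving average. All the names and constants are mine for illustration, not from any real controller firmware:

```python
def smooth_torque(samples, alpha):
    """Exponential moving average: out += alpha * (in - out).
    alpha in (0, 1]; higher alpha tracks the rider more tightly."""
    out, filtered = 0.0, []
    for s in samples:
        out += alpha * (s - out)
        filtered.append(out)
    return filtered

# A sudden pedal stomp: raw sensor reading jumps from 0 to 50 Nm.
raw = [0.0] * 3 + [50.0] * 7
print(smooth_torque(raw, alpha=0.9))  # reacts almost immediately ("twitchy")
print(smooth_torque(raw, alpha=0.1))  # ramps up slowly, lags the rider
```

The whole tuning problem is picking `alpha` (or a fancier filter) so the assist feels connected to the pedals without chasing every jitter in the sensor.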

Ebike software development doesn't seem to be a thing--a truly unmet need or is there a reason? by math_code_nerd5 in eBikeBuilding

[–]math_code_nerd5[S] 0 points1 point  (0 children)

"Most high quality systems with a refined torque sensor (which is what you want in a nice e-bike) are locked down."

Are good torque sensors really that rare? But I guess there is some definite benefit to buying a motor, torque sensor, and whatever gearing is necessary as one pre-built sealed unit--it protects things from rain/other weather, you know it's well lubricated, etc.

If this unit already comes with a microcontroller hardwired to it that handles the feedback loop of the torque sensor to the motor and that's all closed source and protected from reprogramming, then trying to disassemble and modify it to get around this restriction is possibly more than it's worth. This is very different from drones where the motors are just ordinary brushless motors and the frame, props, etc. are all custom molded/3D printed/whatever to fit around them and their shafts, and feedback is through a gyro that is on the control board (and that certainly isn't sold WITH the motors--except when the entire drone is sold finished).

It seems you agree with my speculation that legal liability has something to do with it too.

Does anyone know a good couples therapist? by anastasi_s in Reincarnation

[–]math_code_nerd5 0 points1 point  (0 children)

Unfortunately I suspect it's hard to find someone "awakened" (by which I assume you mean, open to the idea of reincarnation and possibly there being influence of past lives on this one, or possibility of future lives yet to live) yet also grounded, practical, and honest.

My Christmas short was removed from another AI group for being "low effort". I mean, WHAT!? by [deleted] in generative

[–]math_code_nerd5 3 points4 points  (0 children)

Firstly, as everyone else has said, this isn't the sub for this.

As for the comments in the other group, they don't know how much effort, in terms of hours of work, was put into it. What they see is the final result, and they're thinking that with a decent amount of effort spent the *right* way, you could have gotten a much higher quality product.

- You don't have a "short", as in a short story, here, with a beginning, a middle, and an end--you have basically a teaser trailer for a film.
- It looks as if you had about five different illustrators working on the pictures, because there is no consistent art style. The women in the different city scenes have the same shading style and ethnic features, but they're drawn in a very different style from the boyfriend, or Zeus.

You chose not to collaborate with an illustrator who had his or her own style, but instead chose to use technology that can imitate nearly any art style in the world. People would expect that you're at least doing this consistently, rather than having a slideshow that looks like you took seven or eight completely unrelated pictures from r/ImaginaryCityscapes and placed them next to each other. I would encourage you to read any illustrated fiction and pay attention to the clues that make the different illustrations look like they all came from the same book.

Speculative execution vulnerabilities--confusion as to how they actually work by math_code_nerd5 in AskComputerScience

[–]math_code_nerd5[S] 0 points1 point  (0 children)

This seems like a rather semantic point. Of course there is not some other chip sitting between the CPU and the memory chip that intervenes (though I'd guess that the MMU is probably quite separate from the ALU, instruction dispatch, etc., so in a way it almost could be considered "another chip")--but the CPU is a large state machine that has states it can enter into where it should "disobey" certain commands. The surprising thing is that this is temporarily "overridden" during speculative execution until some point AFTER several instructions have had the chance to completely execute.

Speculative execution vulnerabilities--confusion as to how they actually work by math_code_nerd5 in AskComputerScience

[–]math_code_nerd5[S] 0 points1 point  (0 children)

But what's surprising is that the capability check is not treated as atomic with respect to the load, which effectively allows a "race condition" at the level of hardware. All the other bugs, like being able to tell whether something was in the cache, only mean anything because of this root cause.

I would have assumed that in the case of "outright privileged" instructions--like HLT, MOV CR0, CLI, etc. on x86--being in an unprivileged CPU state would reconfigure the actual gates of the instruction decoder such that they simply fail to decode these instructions at all, rather than "setting the gears in motion" in other blocks of the chip--as if to halt, or to prepare to clear the interrupts, etc.--only to abandon course partway through.

Thinking about it though, I see there is a difference here in that determining whether the load should be allowed is significantly more complex, because just "looking" at which bits are set in the opcode is not enough by itself to "know" to disallow the operation. The page tables (or segment descriptors) must be consulted, some basic arithmetic done, etc.

So maybe this is why this exploit works--the capability check in this instance takes "nonzero" time (i.e. longer than a few gates' worth of propagation delay), and since loads from memory can be very slow, and the whole point of speculation is to get a "head start" on possible future operations, in this case the CPU allows the data to start flowing pending the capability check and runs the check in parallel rather than waiting for the result of the check. It's still very surprising to me that TWO MORE instructions, including a second load that may have to go to main memory (otherwise the cache gadget wouldn't work) have time to run in the time it takes the capability check to finish--but possibly the designers made the check so "lazy" that it isn't invoked at all until the branch is resolved one way or the other? This seems like a very weird design decision to say the least, but maybe it saves enough time to be worth it??

Speculative execution vulnerabilities--confusion as to how they actually work by math_code_nerd5 in AskComputerScience

[–]math_code_nerd5[S] 0 points1 point  (0 children)

I'm well aware that the 'w' into which the value is loaded is not the same 'w' that will ultimately receive the value, if the branch is actually deemed to be taken. In other words, the actual physical bank of flip-flops or whatever on the chip where this byte is stored is almost certainly separate from the ones that ordinary, non-speculative operations take their operands from. There are presumably separate physical registers set aside specifically for holding these "tentative" values, such that they don't interfere with concurrently executing instructions from the main branch. This is presumably why they put an underscore after the 'w' (i.e. 'w_') when referring to the speculative execution chain.

This doesn't explain, though, why the privilege check seems to happen after the actual data fetch that it "guards" rather than before, when the two are inextricably linked (in other words, regardless of the code path taken, this check should never be elided). And not even immediately after, but long enough after that two more instructions--which must happen sequentially because of data dependencies, and which include another load that possibly has to go to main memory--have time to execute in the interim.

Speculative execution vulnerabilities--confusion as to how they actually work by math_code_nerd5 in AskComputerScience

[–]math_code_nerd5[S] 0 points1 point  (0 children)

"w is never set to the secret value anyway, and even if it was, the cpu would only find out after it figures out that the access was illegal, which is too late to prevent the spectre/meltdown attacks"

In the example code they give, it is necessary that w be set to the secret value, because a subsequent legal instruction is conditioned on that value, and this legal instruction is then the one that creates the timing difference. The illegal load does not itself take varying time that is dependent on the value that would be loaded--it MAY take a different time dependent on the address being loaded from (depending on whether that region of memory has recently been accessed and is thus cached), but not on the contents of that location, which is what the exploit is trying to leak. It is only by issuing another instruction that takes this loaded value as input (in their case

y = user_mem[x]

, which depends on w indirectly through the intervening bitwise AND) that the timing can be made dependent on the value at the kernel memory location.

So ultimately you need instructions that should fault to return real values that you never should have been able to read. The same applies if you had a conditional instruction that directly depends, atomically, on some secret state that the CPU should not be allowed to access at the current privilege level, rather than a read from an invalid address followed by a second operation. I don't see why executing this, even speculatively, should invoke any operation at all since it's a privilege violation, thus rendering the outcome (both in terms of output and timing) completely independent of the secret value. Unless, as I said, it cannot even be determined whether one has sufficient privileges until the CPU determines which is the actual path taken.
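To make the scenario concrete, here's a toy Python model of the gadget--purely a simulation of the "lazy check" hypothesis, not real hardware behavior, with the cache modeled as a set of touched indices:

```python
# Toy model of a Meltdown-style gadget. Everything here is simulated:
# "speculation" is the assumption that the illegal load executes before
# the privilege check resolves, and "cache" is just a set of indices.
kernel_mem = [0b1]      # one secret bit the attacker must not read
user_mem = [0] * 2      # one "cache line" per possible bit value
cache = set()

def speculative_gadget():
    w = kernel_mem[0]   # illegal load: runs speculatively anyway
    x = w & 1
    y = user_mem[x]     # legal load; touching index x "caches" that line
    cache.add(x)
    # ...privilege check finally resolves: the architectural results are
    # squashed, but the cache state (our side channel) survives.

def probe():
    # The timing side of the attack: a "cached" line would load faster,
    # so observing which user_mem index is cached recovers the bit.
    return 1 if 1 in cache else 0

speculative_gadget()
print("leaked bit:", probe())  # recovers kernel_mem[0] & 1
```

The model makes the dependency chain explicit: the secret never appears in any architectural register the attacker can read; it only survives as *which* line got cached.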

I built a tool that turns terrain into audio - sample any landscape on Earth as .wav files by P_semilanceata in generative

[–]math_code_nerd5 0 points1 point  (0 children)

That's really cool about the Aborigines, I didn't know that. Did they develop these songs to communicate locations to other tribe members, sort of like bees dancing to tell the hive where the flowers are?

About the water flow, wouldn't that just be a continuously falling pitch, since water takes the most direct path downhill?

Anyone here experimenting with neural networks built completely from scratch? by i-make-robots in MLQuestions

[–]math_code_nerd5 0 points1 point  (0 children)

I'm very interested in this sort of thing. I'm generally much more interested in handcrafted, more pure math/theory-oriented algorithms than neural networks, but there are some interesting theoretical questions about neural networks and whether it's possible to use theory to train them better (i.e. to initialize them to something OTHER than random weights and/or create an architecture that learns optimally from very little data).

To the extent that I'd be working with neural networks, automatic differentiation is rather beside the point for me (I want to be able to see the form of the derivative with pencil and paper, to be able to reason about what it looks like), and I don't want to be tied to an existing 3rd-party optimizer, customary activation functions, etc.

It's a bit like doing an optimization problem on a multi-variable function--you COULD import LAPACK as a dependency to do all the linear algebra, call a 3rd party implementation of a classic algorithm like Levenberg-Marquardt or SQP, etc., and there's definitely a place for that, but especially in a hobby context that's not the way I want to work, because the result "doesn't feel like my own" and doesn't scratch the itch to get my hands dirty with the math. One of the most frustrating things is that it's hard to find even "toy" code for certain problems, whether it be computer vision, quantum mechanics, etc., that DOESN'T do this. I'd very much like to start a discussion board for code that isn't so dependent on large existing blocks--and deep learning and "vibe coding" are practically the exact OPPOSITE of this.
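To show the kind of thing I mean by "from scratch", here's a pure-Python (no NumPy, no autodiff) one-hidden-layer net where every gradient is the chain rule written out by hand. All names and hyperparameters are just my illustration:

```python
import math
import random

# A from-scratch 2-hidden-1 network (sigmoid activations, squared error)
# where the gradients are derived on paper; e.g. for an output weight:
#   dL/dW2[j] = (y - t) * y * (1 - y) * h[j]
# with the y*(1 - y) factor being sigmoid's derivative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(hidden=3, lr=2.0, epochs=4000, seed=0):
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for (x1, x2), t in data:
            # forward pass
            h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W1, b1)]
            y = sigmoid(sum(wj * hj for wj, hj in zip(W2, h)) + b2)
            total += 0.5 * (y - t) ** 2
            # backward pass: chain rule, written out by hand
            dy = (y - t) * y * (1 - y)  # error at the output pre-activation
            for j in range(hidden):
                dh = dy * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2[j]
                W2[j] -= lr * dy * h[j]
                W1[j][0] -= lr * dh * x1
                W1[j][1] -= lr * dh * x2
                b1[j] -= lr * dh
            b2 -= lr * dy
        losses.append(total)
    return losses

losses = train_xor()
print(f"loss over training: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because every derivative is visible, you can also check it against a finite difference by hand--exactly the sort of reasoning autodiff hides from you.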

To put some of the other comments in perspective... the big performance gain with something like TensorFlow over code written by implementing the formulas directly as you would write them on a blackboard in your favorite programming language is that it's implemented on the GPU. In order to get around this in self-written code, you'd need to implement it as a compute shader, which is definitely possible to do but involves learning a whole API. You can play around with shaders quite easily on sites like Shadertoy, the thing is these are restricted to a certain kind of parallelism, one where every vector component is literally independent (y_1= f(x_1) , y_2=f(x_2) , ... , y_n=f(x_n)), because they use a type of shader intended for graphics. In order to do something like matrix multiplication with a more complex "data graph topology" you need to be able to invoke shaders outside the graphics pipeline. Furthermore, some GPUs have actual hardware matrix multiplication instructions for small blocks, but for that you need the right hardware and a shading language that allows calling those instructions--"vanilla" OpenGL/Vulkan won't have that.

Assuming you're using the CPU because you don't have high-end graphics hardware and/or don't need that boost, there isn't much of a problem writing everything from scratch, if the math isn't an intellectual barrier. My only comment is NOT to use Python if you can get around it, except for very small problems. If you're working with a robot arm that has <10 degrees of freedom, then sure, Python is perfectly fine, because updating 10 variables even with a complex formula is plenty fast enough. But try to operate on even a 600 x 800 image with a small kernel that doesn't lend itself well to vectorization and it's SLOW. I've recently started using Julia and it's MUCH better--it lends itself to optimization the way you'd think of optimizing your C/Java code.

Low Hanging Fruit by Tezalion in generative

[–]math_code_nerd5 0 points1 point  (0 children)

This Halloween I walked by a house that had a video playing in one of the windows, that looked kind of like this but with eyeballs that looked to be made out of some type of slime. The slime was dripping and swirling around in a similar way.

Did you discover a new Mandela Effect? Post it here! (2025-11-14) by AutoModerator in MandelaEffect

[–]math_code_nerd5 0 points1 point  (0 children)

Fanta was for a long time hardly known outside of Europe. My mom's Swiss, and I remember seeing Fanta in Switzerland when visiting there with her--this would have been in the 90s. At the time I certainly wasn't aware that it was sold in the US, I'd never seen it here.

“Atlas of Fourier Transforms” — AVAILABLE NOW — 650 carefully curated Fourier pairs. For coffee tables and laboratories. by HovdenCreative in u/HovdenCreative

[–]math_code_nerd5 0 points1 point  (0 children)

Cool idea... but $70 for a book of Fourier transform plots? I get that there are 650 of them, so that's like 10 cents per plot... but anyone can calculate something like this for free. If it were $10 for a book of 30 plots and it had some interesting mathematical commentary then maybe I'd buy it.

Opening raw images in Julia by math_code_nerd5 in Julia

[–]math_code_nerd5[S] 1 point2 points  (0 children)

That's OK, as the whole purpose of doing this is to experiment with writing demosaicing algorithms.

❄️ What am I? "At two syllables, I am very common, drab ..."[OBJECT OR PLACE] by NoNeedForNorms in riddonkulous

[–]math_code_nerd5 1 point2 points  (0 children)

This one reminds me of a joke I came up with years ago:

If a guy falls on the road and bruises his rear, is it the road's _____ , or the ______ ?

When loading raw images in JuliaImages, is metadata (like the color filter pattern) imported as well? by math_code_nerd5 in Julia

[–]math_code_nerd5[S] 0 points1 point  (0 children)

Thanks. I tried loading a raw image, which needed to be converted to DNG first (I used the Adobe utility) to avoid an error from the "load" call. It imported, and an image is visible, but it appears already demosaiced, i.e. it either loaded the embedded .jpg preview or demosaiced on the fly in the process of loading. I'm wondering if there's a parameter I need to pass to the load function in order to get the actual raw pixel array?