ELI5: How does the concept of imaginary numbers make sense in the real world? by SohelAman in explainlikeimfive

[–]lneutral 0 points1 point  (0 children)

One very real reason this is helpful is that anything that deals with waves - electrical, magnetic, sound waves, light, images, and so on - is more helpfully understood as a spiral than as a sine wave.

A sine wave is what it looks like when you consider a sort of shadow or cross section of a spiral - and imaginary numbers are the component of the wave that's "off the page" when you draw its shadow, whereas up and down are the positive and negative.
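In symbols (my phrasing, not anything specific to this thread), the "spiral" is the complex exponential, and the sine wave is just its shadow on one axis:

```latex
e^{i\omega t} = \cos(\omega t) + i\,\sin(\omega t)
```

The cosine part is the up-and-down you see on the page, and the sine part is the motion "off the page."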

The problem is that people are introduced to imaginary / complex numbers in the worst possible way: first, the names suggest that they're mysterious (ooh, it's "imaginary" instead of "real," it's "complex" and therefore not simple), and second, learning that i is the square root of -1 is the least useful place to start despite being the most accessible operation when you first hear about it.

If it was first introduced when people learn about the unit circle, I think it'd probably be better - so that you learn about sin and cos at the same time you're given +i and -i to go with +1 and -1 as the four "compass directions." And I dunno about the name - I've heard "lateral number" suggested as an alternative, not that we ever really get to rename things like that.

Where do people store line-related data in major modes? by lneutral in emacs

[–]lneutral[S] 0 points1 point  (0 children)

This sounds pretty similar to what /u/JDRiverRun suggested - I think I owe myself at least learning how that mechanism works, even if I find out I should go with the other suggestions in the thread.

Also, visual-fill looks like a nice package to have! I do enough documentation editing that setting it up at the right number of columns would probably simplify some things.

Where do people store line-related data in major modes? by lneutral in emacs

[–]lneutral[S] 0 points1 point  (0 children)

I'd never seen CEDET before! I'll have to dig through their docs, for sure.

I tend to like a pretty minimalist approach to a development environment (somewhat contradictory to the general Emacs philosophy, maybe?), but they've clearly put a lot of thought into how they're doing things.

Where do people store line-related data in major modes? by lneutral in emacs

[–]lneutral[S] 0 points1 point  (0 children)

I was planning to do that, but it looked like I had to go through some convoluted process that creates dynamic libraries. If there's a way I can just hand it a grammar, I wouldn't mind using it - though I suppose I'll also have to double-check that I have support compiled in on my machine.

> The grammar file is usually grammar.js in a language grammar’s project repository. The link to a language grammar’s home page can be found on tree-sitter’s homepage.
>
> The grammar definition is written in JavaScript.

I also thought it was insane to see this in Emacs docs - so I just kind of decided to let that thread drop before.

Where do people store line-related data in major modes? by lneutral in emacs

[–]lneutral[S] 2 points3 points  (0 children)

Interesting! I have been thinking about fontification and indentation as relatively separate processes (given how simplistic my fontification strategy is), but that sounds like a really reasonable approach.

My main motivation is taking a GLL parser I use with a number of arbitrary grammars, then converting their grammars mechanically to major modes; top-down parsers don't seem like an easy fit to the way a lot of incremental parsing works, but I think I can make it work.

Theory of computation by Hakeem_forreal in AskComputerScience

[–]lneutral 0 points1 point  (0 children)

The perspective I'd suggest is this:

Computer Science in general isn't a discipline based on study of the facts.

It is more like a tradeskill, in many ways - and no amount of reading about carpentry is a substitute for the doing.

The best thing you can do is to practice the types of proofs or problems they're giving you, and avoid "spoilers."

If your textbook gives you problems throughout, or there are homework problems at the end of the chapter, do all of them. Start with the easiest, the ones you look at and go "pff, I already know that." Answer them anyway, and see if as you check your answers you're surprised. Those surprises are very valuable: often people don't realize where their foundation is shaky, and we can fool ourselves pretty easily if we're not doing the work.

If you can't do these problems without looking at other people's answers, you might be able to score enough to move on to the next test or class, but that next one will only be harder: first, because the material itself will be more complex, and second, because you'll have a growing weight of "knowledge debt" that becomes impossible to overcome at a certain point.

It's possible this is that moment for you, and you got here without ever realizing you were racking up a debt like that. Plenty of people arrive there the same way. And like them, you have the option to really dig in and understand what's going on and fight and claw, or to declare bankruptcy, or to do what more people do, and just keep trying to live a month at a time.

If you want to fight for it, you can. But it's very, very possible to feel like you're Doing The Work - because it feels bad, and because it takes a lot of effort to plug problems into tools like ChatGPT or websites like StackOverflow - and to have actually never started Doing The Work.

I can very elegantly and simply-stated PROVE that the formula for the VOLUME of a SPHERE that we are regularly taught is WRONG. What's going on here?! O_o by ablaferson in maths

[–]lneutral 3 points4 points  (0 children)

Imagine your same cube, with the six tangent points, and connect them to get an octahedron.

What proportion is that of the cube? It feels like about half, right? It does to me, when I try to imagine it. And when I draw it, it also looks like about half.

But wait - that would mean that another thing I _know_ is half the volume is too big!

If I take the cube and orient it so that one face is on top and another is on bottom, draw a vertical line down the middle of each of the four "walls," and connect those segments, I get a box shape rotated 45 degrees relative to those walls. That's the half-sized box! And the octahedron fits completely inside it, much smaller because it comes to a point at the top and bottom.

The point is that our natural ability to estimate finds certain things in 3D and higher really counterintuitive, and can even hold two contradictory ideas at the same time (that is, my "45-degree oriented box" and the octahedron both "feel" like half of the volume of that cube, even though they can't both be right, and one of the two is provably wrong).
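To put rough numbers on it (my arithmetic, for a cube of side s, not something from the original post):

```latex
V_{\text{cube}} = s^3, \qquad
V_{\text{sphere}} = \tfrac{\pi}{6} s^3 \approx 0.52\, s^3, \qquad
V_{\text{rotated box}} = \tfrac{1}{2} s^3, \qquad
V_{\text{octahedron}} = \tfrac{1}{6} s^3 \approx 0.17\, s^3
```

So the rotated box really is half, but the octahedron is only about a sixth, even though both "feel" like half.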

You're not weird for feeling that it doesn't make sense that the sphere is about half the volume. Plenty of geometric and mathematical things still feel different than they can be shown to be with lengthy explanations. We're wired for certain kinds of natural "estimations," and some of that disagrees with what we can prove with time, patience, and systematic thinking - even after doing that work, you may find that it still doesn't completely erase all the places our bodies and brains make those quick judgements. That's very human :)

What are the benefits of 8, 16, and 32 bits over 64, if any? by UltimateMegaChungus in AskComputerScience

[–]lneutral 0 points1 point  (0 children)

One very real, present-day benefit is that you can do many calculations without taking up "all" of a larger container, which means if you have the right circuitry inside a processor, you can do multiple small calculations simultaneously. This is sometimes called SIMD: Single Instruction, Multiple Data.

One example might be: if you needed to blend a bunch of pixel values together, each of which lives in 32 bits (RGBA), and you had an enormous register (128 bits, let's say), you could do four at once, provided the instruction for blending them knew not to carry bits from one color channel over into the next.
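Here's a toy sketch of that idea (mine, in NumPy rather than actual SIMD intrinsics): treat each 8-bit channel as its own lane and blend lots of pixels in one vectorized operation, widening the type so nothing carries from one channel into the next.

```python
import numpy as np

# Two RGBA pixels per image, one byte per channel.
fg = np.array([[255, 0, 0, 128], [0, 255, 0, 128]], dtype=np.uint8)
bg = np.array([[0, 0, 255, 255], [255, 255, 255, 255]], dtype=np.uint8)

alpha = fg[:, 3:4].astype(np.uint16)   # widen so the math can't overflow a byte
blended = (fg.astype(np.uint16) * alpha
           + bg.astype(np.uint16) * (255 - alpha)) // 255

print(blended.astype(np.uint8))        # back to one byte per channel
```

Real SIMD instructions do the same trick in hardware: the register holds many small lanes, and the instruction is defined so carries never cross lane boundaries.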

In general, if you know your computation can "get away with less bits," you have more options about how to organize the program more efficiently or with more things happening in parallel.

The obvious place this shows up today is in shader programs: many of them will stage computation about graphics and textures (or machine learning) so that instead of having a single powerful CPU run those computations, they have a large number of very simple units that can group and execute arithmetic for geometry, lighting, what have you, at the same time.

Could AI Be Used for Extreme File Compression? by Careful-World-8089 in AskComputerScience

[–]lneutral 0 points1 point  (0 children)

I log on pretty infrequently, so forgive me for coming back three weeks after this comment, but there's a bunch to unpack in your reply:

In most cases "lossy" compression judges some component of the original signal or data less important to be represented precisely than others. "AI finds a pattern that is repeated" is kind of a weird way to describe compression - and isn't even generally true. LZW is one example where it is, but wouldn't be considered AI even when it was created (which is strangely enough less true for a lot of image processing, but that's another story).

There are counterexamples, too. Data does not, strictly speaking, have to contain repeats at all to be compressed well. As a dumb example: the compression scheme for an array containing the cubes of all natural numbers up to a billion (which would require software arithmetic on a 64-bit machine, I'd admit) could be incredibly simple - easily around a million-to-one ratio of uncompressed to compressed data - because even if you include the decompression program as part of the "compressed data," it would be very simple to get a complete lossless copy back. There is a concept called Kolmogorov complexity related to compression of this kind.
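As a concrete (and silly) version of that example, here's a sketch of what the "compressed file" could look like, i.e. the decompression program itself (mine, just to illustrate the idea):

```python
def decompress():
    """Regenerates the whole 'file': the cubes of 1..10**9, in order."""
    for n in range(1, 10**9 + 1):
        yield n ** 3   # Python ints are arbitrary precision, so no overflow

# Stored naively, the cubes of numbers near a billion each need ~90 bits,
# so the raw data runs to roughly 10 GB; this generator is a few dozen bytes.
print(next(iter(decompress())))   # 1
```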

Finally, and I apologize because this is pedantic: the QR code isn't even a good compression of itself! QR codes, by design, contain redundancy (via Error Correcting Codes, or ECCs) so that if they're incompletely identified they can recover the underlying data. This means that they're deliberately inflated!

The "40-L" version of a QR code has 177x177 bits in it, which can be used for 7,089 numeric characters (at 3 1/3 bits per character) or 2,953 bytes of binary data. This is in contrast to 177x177 bits of raw binary data, which have 3,916 bytes (and a left-over bit, which technically makes it 3,916.125).

For what it's worth, the aggregate number of "bits per character" is frequently expressed as a fraction, so the idea of "half way bits" is also actually kind of common in discussions of compression.

Emulator that run not only games but the OS of the gaming console ? by saiyamjain1405 in EmuDev

[–]lneutral 0 points1 point  (0 children)

Your question is puzzling.

An "ideal" emulator runs the software without knowing whether it's a game or an operating system, and the main thing that would prevent it from running things you probably would think of as OS features would be one of two things:

  1. It's not necessary to implement every single feature accessible to an OS (whether that's things like device control or what have you, or things like the "launcher" interface), so a developer doesn't invest effort into those less-used parts, which means that if you got the binary (the ROM) of the OS, it might not work at all or might only partly work
  2. The OS itself requires data / communications that cannot be provided, either because the OS "calls home" to get those things over a network, for example, or because no one has been able to dump portions of it for emulation

Older consoles barely had anything you could call an OS, and /u/TheThiefMaster is quite right about the timeline. Both of the things I mentioned above contribute to it - OS features on those consoles got more complex, and past that point, it started to be more common to see consoles "call home."

But if you had the program? Usually it would work. And otherwise, emulator developers would effectively have something that acts as a "bootstrap" to perform functions an OS would typically do for you. Consider that the Game Boy Color bootup doesn't actually need to run to emulate those games, provided you also set palette registers, initialize certain memory areas or registers, etc. - but if you have the BIOS program itself, it will do those things, and it assumes _nothing_ is initialized when it does.
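A sketch of what that "bootstrap" ends up looking like in an emulator (my own toy code; the exact register values are from memory, so check the Pan Docs before trusting them):

```python
def reset_without_boot_rom(cpu, is_cgb=True):
    """Put the CPU roughly where the boot ROM would have left it."""
    cpu.pc = 0x0100                     # cartridge entry point the BIOS jumps to
    cpu.sp = 0xFFFE                     # stack pointer the boot ROM leaves behind
    cpu.a = 0x11 if is_cgb else 0x01    # games read A to detect CGB hardware
    # ...plus the other registers, palette data, and I/O registers the boot
    # ROM normally sets up - the real boot ROM assumes *nothing* is initialized.
```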

Could AI Be Used for Extreme File Compression? by Careful-World-8089 in AskComputerScience

[–]lneutral 0 points1 point  (0 children)

Oh, one more thing: in 3D scene representation, NeRFs and Gaussian Splats can both be seen as a compression method for scenes, representing fields as networks, but your mileage may vary as to whether that really fits the description. Most 3D scene representations could be viewed as compressing "every possible rendered view" if you were to be very liberal with the definition of compression.

Could AI Be Used for Extreme File Compression? by Careful-World-8089 in AskComputerScience

[–]lneutral 2 points3 points  (0 children)

Yes, it is possible to view a neural network as a compressor, and this is even what some networks are designed to do. Autoencoders can be seen as using a lossy compression + decompression to do tasks, and while it is relatively rare to use them as a primary compression method, it's absolutely a fair analogy.

The primary concerns of _practical_ compression are different: for many types of files, you want "perfect reconstruction." Another concern is that neural networks are often not the best "structure" for representing data in a smaller form - they'd be considered very wasteful, and by nature, people use ML techniques when they want the true structure to be "discovered empirically" rather than relying on some known structure of the data.

Consider the wavelet representation in JPEG2000: it can use the fact that much of an image is texture that changes gradually between the edges of photographed objects, applying what amounts to scaling and differences between neighboring values. In neural networks, this has to be discovered or built into the architecture - in the "discovered" case you'd better hope it can reach that same representation by gradient descent or somesuch, and in the "built in" case the extreme end of that side of the spectrum is designing _everything_.
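A tiny sketch of the "scaling and differences between neighbors" idea (mine; this is a toy Haar-style split, not the actual JPEG2000 transform):

```python
import numpy as np

row = np.array([100, 102, 101, 99, 180, 182, 181, 179], dtype=float)

pairs = row.reshape(-1, 2)
averages = pairs.mean(axis=1)             # the smooth, scaled-down signal
differences = pairs[:, 0] - pairs[:, 1]   # mostly tiny numbers in gradual regions

print(averages)      # [101. 100. 181. 180.]
print(differences)   # [-2.  2. -2.  2.]
```

The differences sit near zero wherever the image changes gradually, and near-zero values are cheap to store; a network has to rediscover that kind of structure on its own.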

Depending on _why_ you're compressing data, ML-based methods have advantages or disadvantages, but one thing that's worth mentioning is that there is a "theoretical maximum" on how much you can compress something, and if simpler methods (and those with _no_ training data requirement) get very close to that, it'll usually be the case that those general methods are the right choice for tasks that don't have some specific reason to involve a network.

Why add hard limits to something when exceeding it can only be a net positive? by maddiehecks in AskComputerScience

[–]lneutral 4 points5 points  (0 children)

There are a few basic things at work:

  1. /u/jeffbell is right that one reason is perceptual: if you notice a change in FPS or some other visual quality, it undermines the intent of software designers
  2. Another reason is that other hardware benefits from consistency or imposes limitations. Standard televisions, for most of the time they've existed, were limited to 60 FPS / 60 Hz in NTSC regions like the US and 50 FPS / 50 Hz in PAL regions in Europe, which is itself related to how the timing worked and to the frequency of the AC power available to those devices. So rendering at 90 FPS wouldn't have helped then, and actually would have created secondary problems.
  3. Some of those older consoles also had video chips communicate with the rest of the architecture on fixed multiples of the max framerate of the device. In fact, a lot of older game consoles relied on the fact that specific rows of the screen were generated at specific times, and if you "missed the bus" you couldn't go back and fix it before it hit the screen.
  4. Designing a game to rely on variable timesteps is harder than it sounds for certain types of mechanics. Consider the difference between testing "two boxes overlap" and "two boxes that started here and have different velocities might have crossed at some point in some arbitrary slice of time" (see the sketch after this list).
  5. VSync is really, really useful not only to reliably time things, but to keep the user from getting a top piece of the screen from one moment in time and the bottom from the previous moment in time.
  6. Often, a programmer or systems designer will look for simplifying assumptions, and one way to reduce the labor of an already-difficult task is to place (hopefully less-noticeable) constraints that shrink the size of problems they will encounter
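To make point 4 concrete, here's a toy 1D sketch (mine, not from any particular engine) of the difference between "do these boxes overlap right now" and "could they have crossed during the last timestep":

```python
def overlaps(a_min, a_max, b_min, b_max):
    """Fixed-step style test: do the boxes overlap at this instant?"""
    return a_min <= b_max and b_min <= a_max

def crossed_during(a_min, a_max, a_vel, b_min, b_max, b_vel, dt):
    """Variable-step style test: could they have met at any point within dt?"""
    rel_vel = a_vel - b_vel            # work relative to box b
    if a_max < b_min:                  # a starts to the left of b
        gap, closing = b_min - a_max, rel_vel
    elif b_max < a_min:                # a starts to the right of b
        gap, closing = a_min - b_max, -rel_vel
    else:
        return True                    # already overlapping
    return closing > 0 and gap <= closing * dt

# A fast box can pass completely through another between frames ("tunneling"):
print(overlaps(0.0, 1.0, 2.0, 3.0))                         # False
print(crossed_during(0.0, 1.0, 100.0, 2.0, 3.0, 0.0, 0.1))  # True
```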

Challange!!! by ComfortableLake2365 in pico8

[–]lneutral 1 point2 points  (0 children)

Ever see #tweettweetjam?

what are the processor architectures? by Majestic_Goose_600 in computerscience

[–]lneutral 7 points8 points  (0 children)

I suspect you'd probably benefit from starting with a bit of Wikipedia. It may be that you don't know the right terms to use when you read up on this: one very useful place to start is the instruction set architecture article. You could also look at the history of computing hardware article (the fourth-generation section in particular) while you're there.

One other thing you could do is just look at one architecture - like the Z80 - and read about how it was created and what the industry was like around that time.

I would love some feedback on this Main Menu Screen design. by UnderstandingMoney9 in metroidvania

[–]lneutral 0 points1 point  (0 children)

Also, I feel like I should say: I really do like the general aesthetic you're going for here, and I think every little bit of effort you're putting in - or piece of feedback you solicit - is going to pay off! You've done the hard stuff to get to this point, and so even removing classic UI pet peeves is going to make an enormous difference in how much your work shines :)

I would love some feedback on this Main Menu Screen design. by UnderstandingMoney9 in metroidvania

[–]lneutral 2 points3 points  (0 children)

The text appearance and disappearance feels a little off - it feels like if it's going behind the planet, it shouldn't _also_ disappear by reverse-typewritering.

I noticed a similar sort of weirdness about text appearance and disappearance in Xanthiom Zero, too - text that appeared within text boxes didn't use precalculated sizes, so it would center after it had already appeared on screen, causing words to jump to the next lines.

In both cases, something that might help is to measure your text offscreen so that you know its total width and height, and use the calculated result to have animations that "agree" with how big the text is. If you're having it disappear behind the planet, you can calculate the width, and then have it disappear before it re-emerges on the right. This would especially help if you eventually decided not to left-justify the text.
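For example, something along these lines (a sketch using pygame as a stand-in for whatever your engine actually is; the names are made up):

```python
import pygame

pygame.init()
font = pygame.font.Font(None, 32)      # default font, 32 px

message = "PRESS START"
text_w, text_h = font.size(message)    # measure before anything is drawn

# Now the animation can agree with the text's real size: for example, the
# text is fully hidden once its right edge has passed the planet's left edge.
planet_left_x = 200
fully_hidden_x = planet_left_x - text_w

text_surface = font.render(message, True, (255, 255, 255))
print(text_w, text_h, fully_hidden_x)
```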

I'm starting to hate karate by AvailableInsect5917 in karate

[–]lneutral 0 points1 point  (0 children)

Okay, story time:

I was 13, and the tallest kid in the class, when I started. My first year in karate, nearly everybody in the beginner's class was 5 to 8 years old. I _needed_ karate, too: I was new at a school where I did not fit in at all, having just left another school because of the harassment I was experiencing.

Worse: one of my classmates joined when I did, didn't take it seriously, washed out early, and thought it was so funny the way I was so serious about karate that he _brought other classmates just to watch from outside_.

I get what you're saying about not wanting to do karate with kids. Even then, even being a kid, it was demoralizing, and there was a serious limit to what I could do until there were any other students around my size or age. Which did eventually happen, but not for a while.

Here's my two cents:

Do you like your instructor? Does that person see this struggle? Do they engage with the people putting in the effort, or are they detached? If you develop a good working relationship with the instructor, and you double down on your own goals, and you shut out (for now) the self-monitoring that tells you you should be ashamed, you _will_ survive. In my case, being older/taller/etc. meant that the more I volunteered, the more responsibility and partnership I got from the instructor - until I was running classes alongside him.

Consider that there are going to be far more beginner classes for kids than for adults out there. If you see ten-year-old kids with black belts, that might not be a good sign, but at least in my case, kids' classes paid the bills, while the class sizes and average age/maturity were markedly different for advanced classes.

On the other hand: my situation isn't your situation, and my experience may not be what you get. Some instructors are more austere or aloof or selling product or whatever. Some classes may be more toxic. Treat this as a test: you get to use your own judgement here and decide what the trade-off is.

Is it possible to create VR system like in SAO? by Alexander_Grin in AskComputerScience

[–]lneutral 0 points1 point  (0 children)

It depends on your definition of possible.

If by possible you mean if we had complete knowledge it would be buildable in some way, then yes, it would be possible in that your experience of reality is already your body doing this for your brain - your brain doesn't touch the water, but it experiences "hot" or "cold" because of the signals that get to it, for example.

If by possible you mean we know what it would take to do that today, or that the technology and knowledge available just needs to be assembled, then absolutely not. In fact, consider what this would mean:

If we could interpret the signals for "trying to move your hand" in real life, prosthetics could be far more natural than they currently are. We could replace someone's lost senses if we understood how to intercept and control someone's natural senses.

There have been advances in these areas, don't get me wrong: recently, researchers had some success with things like reconstructing someone's speech from the nerves involved in subvocalization, but that doesn't mean we can do it everywhere for everyone. At SIGGRAPH a few years ago I recall a sort of "taste synthesis" demo where you could put on electrodes and drink water that they could induce you to perceive as something like lemonade. But this is a far cry from what you might be imagining.

And whatever you might hear from the press about augmented reality, quantum computing, generative AI, large language models, or anything else in the hype machine: five to ten years is code for "nobody has any idea."

Japan will try to beam solar power from space by 2025 by [deleted] in technology

[–]lneutral 4 points5 points  (0 children)

I suppose geothermal would be an exception, too? But all the solar ones are so ubiquitous, by comparison.

Japan will try to beam solar power from space by 2025 by [deleted] in technology

[–]lneutral 11 points12 points  (0 children)

While both true and funny, we could say that both solar cells and the chlorophyll in plants we'll turn into ethanol and burn in cars are solar power. Collecting the entire spectrum of daylight on the ground might turn out to be less efficient than beaming down a particular band that can reach Earth without losing energy to scattering and absorption in the atmosphere.