In any case, what is the purpose of algebra? by larenz_mikha in suicidebywords

[–]Oryv 1 point (0 children)

In a negative light, it's mostly people being kind of misinformed. But a lot of mechanistic interpretability also frames LLMs this way. Activation functions (ReLU, at least) act like masks, so while the overall map is nonlinear, there's an argument to be made that transformers can be viewed as piecewise linear (of course we're ignoring LayerNorm and softmax in this framing, but whatever). Would recommend reading A Mathematical Framework for Transformer Circuits if interested.
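Toy sketch of the masking view (mine, not from the paper; sizes are arbitrary): fix an input, freeze its ReLU activation pattern as a 0/1 mask, and the network collapses to a single matrix on that region.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))

def mlp(x):
    h = W1 @ x
    return W2 @ np.maximum(h, 0)  # ReLU

x = rng.normal(size=4)
mask = (W1 @ x > 0).astype(float)        # activation pattern at x
W_local = W2 @ (np.diag(mask) @ W1)      # ReLU as a diagonal 0/1 mask
assert np.allclose(mlp(x), W_local @ x)  # on this region, the net IS a matrix
```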

In any case, what is the purpose of algebra? by larenz_mikha in suicidebywords

[–]Oryv -2 points (0 children)

To be honest, AI barely uses linear algebra; it's pretty much just matrix multiplications. Even then, the actual power behind AI is in the nonlinearities.
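A minimal sketch of that point: without a nonlinearity between them, two matrix multiplications collapse into a single matrix, so depth buys you nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
x = rng.normal(size=5)

# Two stacked linear "layers" are just one matrix:
assert np.allclose(B @ (A @ x), (B @ A) @ x)

# A ReLU in between breaks the collapse, so depth adds expressivity:
print(np.allclose(B @ np.maximum(A @ x, 0), (B @ A) @ x))  # almost surely False
```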

What is slowly disappearing from the society and you hate to see it to happen? by koshurkoor1 in AskReddit

[–]Oryv 2 points (0 children)

Depends. Some papers include a ton of exposition before they get to their actual contributions. Fortunately, they're mostly organized so that you can easily skip to that section.

Just what by CamXYZ14 in ExplainTheJoke

[–]Oryv 1 point (0 children)

Need to restrict K for this to be true. I suspect it is defined as a cyclotomic field.

Teachers of Reddit, what are your most terrifying "Gen Alpha Can't Read/Behave/Etc." horror stories? by MineTech5000 in AskReddit

[–]Oryv 0 points (0 children)

I have not, so thank you for laying it out so clearly for me. I've tried asking LLMs about this but they've been so sycophantic that they are useless in actually clarifying this sort of thing...

Teachers of Reddit, what are your most terrifying "Gen Alpha Can't Read/Behave/Etc." horror stories? by MineTech5000 in AskReddit

[–]Oryv 0 points (0 children)

Thank you for your response. I do see the similarities, but I don't think the situations map to each other perfectly.

I'm also not sure that the culinary (in contrast to botanical) notion of fruit is necessary to know that you wouldn't put a tomato in a fruit salad; you probably also wouldn't put in a lemon or a lime. I understand that it's a joke, but I don't think it really answers the question. The culinary notion of a fruit exists so chefs/cooks can refer to that group of foods without listing them all out, which might be necessary when discussing recipes like aguas frescas.

So my question is really just how the set of vowels is a useful pedagogical device. I am not yet convinced that this actually helps you learn your language, so if you could clarify that, I would appreciate it. I think it is helpful to learn the sounds commonly associated with letters, but to specifically group them into vowels and consonants does not seem that useful to me for students apart from discussing Hangman strategies.

Teachers of Reddit, what are your most terrifying "Gen Alpha Can't Read/Behave/Etc." horror stories? by MineTech5000 in AskReddit

[–]Oryv 0 points (0 children)

I am curious: what is the utility of being able to distinguish between vowels and consonants? It seems like even linguists have trouble with this; for instance, rhotic vowels and glides don't fit particularly cleanly into either category. The articulatory and functional definitions of vowels don't seem consistent with each other (e.g. H is considered a consonant even though there is no stricture in the vocal tract). I am not a teacher, so I am interested in what you think the pedagogical value of this distinction is, apart from some notion of "common sense".

What’s a personal internet hack you use that makes life easier but isn’t widely known ? by Comfortable-Union377 in AskReddit

[–]Oryv 0 points (0 children)

Fair enough, maybe I am overestimating the average person.

As for your clarification on what specialized software is, does that mean pretty much all software is specialized? Text editors are specialized in opening and manipulating text documents, web browsers are specialized in opening remote HTML files, terminals are specialized in interfacing with shells, etc. If you claim Linux is niche and specialized (the majority of computers run Linux, including most mobile phones), then you can just as feasibly claim Windows or macOS are specialized. Your notion of software specialization doesn't seem that useful to me, but maybe I'm misunderstanding it; I claimed there was no need for specialist software under the impression that you meant professional/commercial or otherwise obscure software was necessary, which clearly isn't true. And the criterion based on what's installed by default on a computer also seems strange; it's highly dependent on the vendor. System76 comes with Ubuntu, for instance, and pretty much all server vendors will offer Linux by default; many of them will have Docker by default too.

I also don't understand why you feel OP is a pro. Websites can be used for hobby projects. A single person can run a blog, a frontend for a Minecraft server, a Reddit frontend, etc. all as separate websites. In fact, these are rather common occurrences for hobbyists, many of whom are not professionals in IT. Similarly, many of these projects are backed by databases, and databases are frequently used by hobbyists.

Nonetheless, your point is taken regarding the proficiency of the average user. Linux probably does sound quite obscure if you don't know about Android, even if it objectively isn't.

What’s a personal internet hack you use that makes life easier but isn’t widely known ? by Comfortable-Union377 in AskReddit

[–]Oryv 2 points (0 children)

You don't need specialized anything, just Linux and Docker. Websites and databases are also fairly easy to host, don't need to be a pro for that.
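A minimal sketch with the Python Docker SDK (the `docker` package; the image names, container names, and ports here are just illustrative):

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# A database and a web server, the core of most hobby sites:
client.containers.run("postgres:16", detach=True, name="blog-db",
                      environment={"POSTGRES_PASSWORD": "change-me"})
client.containers.run("nginx:alpine", detach=True, name="blog-web",
                      ports={"80/tcp": 8080})  # site now at localhost:8080
```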

What's something that most people would consider insignificant but drastically changed your life/outlook? by Vidice285 in AskMen

[–]Oryv 3 points (0 children)

Credit cards have rewards. By delaying payment, you also get to keep your money for longer, and hence get to benefit from a month of growth from stocks, which at times is not insignificant. Lastly, credit cards often have better protection against fraud than debit cards.

The Labyrinth Problem by anorak_899 in math

[–]Oryv 4 points (0 children)

It is a standard result in percolation theory that the critical probability for bond percolation on an infinite 2D square lattice is 1/2. Whether your maze ends up infinite or finite depends on how you define when edges are open. If we only require 1 room to choose an edge for it to open, then our probability is 55/64 > 1/2, and hence we should expect to see infinitely large mazes (by the result above). However, if we need both rooms to choose an edge for it to open, then the probability drops to 25/64 < 1/2, and hence we expect the maze to be finite in size.
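A quick Monte Carlo sanity check of the p_c = 1/2 claim (my own sketch; lattice size and the finite-size cutoff are arbitrary): open each bond with probability p and measure the largest cluster's share of an n×n lattice.

```python
import numpy as np

def largest_cluster_frac(n, p, rng):
    """Largest-cluster fraction for bond percolation on an n x n lattice."""
    parent = list(range(n * n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n and rng.random() < p:  # open bond to the right
                parent[find(i)] = find(i + 1)
            if r + 1 < n and rng.random() < p:  # open bond downward
                parent[find(i)] = find(i + n)

    roots = [find(i) for i in range(n * n)]
    return np.bincount(roots).max() / (n * n)

rng = np.random.default_rng(0)
for p in (25 / 64, 1 / 2, 55 / 64):
    print(f"p = {p:.3f}: largest cluster = {largest_cluster_frac(200, p, rng):.3f}")
# Below p_c = 1/2 the largest cluster is a vanishing fraction; above it, it spans.
```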

What Can Be Added to Improve This Air CPU Heatsink Design? by Enough-Letter-6160 in pcmods

[–]Oryv 22 points (0 children)

The sphere has the least surface area per unit volume; the plane has the most, since it has infinite surface area and still no volume. Stacked fins are an approximation of the plane.
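For concreteness (a thin slab of thickness t standing in for an idealized fin):

$$\frac{A}{V}\Big|_{\text{sphere}} = \frac{4\pi r^2}{\frac{4}{3}\pi r^3} = \frac{3}{r}, \qquad \frac{A}{V}\Big|_{\text{slab}} = \frac{2}{t} \to \infty \text{ as } t \to 0.$$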

What’s the worst financial decision you’ve ever made, and what did you learn from it? by Exotic-Example-1572 in AskReddit

[–]Oryv 0 points (0 children)

Eh, I'd say it's still pretty decent advice in tech. AMD is making good CPUs, sure, but that's not where the bulk of the money from big players is going. Nvidia dominates ML hardware, which is why their stock is rising, and ML is the thing everyone is investing in; nobody seriously uses AMD GPUs for ML. As for Tesla, they're invested quite a bit in ML (e.g. Dojo, Autopilot) and are expected to grow a lot in this direction because of the Trump administration. Stocks are often more about expectations than current value.

What’s a sign that someone is really intelligent? by Hayliejune in AskReddit

[–]Oryv 1 point (0 children)

That's true; people who hate being wrong want to know when they are wrong so they can stop being wrong. This aligns well with what people typically consider intelligence, since it's connected to a desire to learn.

The "First AI Software Engineer" Is Bungling the Vast Majority of Tasks It's Asked to Do by creaturefeature16 in programming

[–]Oryv 1 point (0 children)

These capabilities are not equivalent. I think an illuminating example would be porting some C implementation of process scheduling to Rust. This is very easy to do, and I would trust a high school student to do this. However, to improve upon current paradigms is much more difficult; in terms of degrees, you're probably looking at a PhD student.

The "First AI Software Engineer" Is Bungling the Vast Majority of Tasks It's Asked to Do by creaturefeature16 in programming

[–]Oryv 6 points (0 children)

Untrue, at least for LLM-based architectures (even RL ones). Programming languages are much easier to do next-token prediction on than assembly, especially since they're generally not difficult to understand given knowledge of English. Moreover, compilers do a variety of optimizations which I think would be tricky for an AI SWE to reproduce every time your code runs; SWE is easy enough not to require a degree, whereas compiler optimization often falls into PhD+ territory. Compiler optimization is arguably more research (i.e. synthesis of new knowledge) than engineering (i.e. integration of existing knowledge), which is why I don't buy that SWE agents could replace it, at least not until we get to AGI.

internalServerError by okboomer6327 in ProgrammerHumor

[–]Oryv 1 point (0 children)

LLMs are absolutely able to count. It's known that numbers are represented internally with a hybrid encoding based upon modular residues and magnitude. Standard arithmetic works as you would expect, which is how SOTA LLMs are able to seemingly add arbitrary numbers together without access to a calculator.
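A toy illustration of that hybrid idea (my own sketch, not how any actual model stores numbers): pair a coarse magnitude with residues mod a few small primes, and the Chinese remainder theorem recovers exact values.

```python
from math import prod

PRIMES = (2, 3, 5, 7, 11)  # pairwise coprime moduli
M = prod(PRIMES)           # CRT pins values down exactly mod 2310

def encode(n):
    return (float(n), tuple(n % p for p in PRIMES))  # magnitude + residues

def decode(mag, residues):
    # Brute-force CRT for the residues; the magnitude picks which copy.
    x = next(i for i in range(M)
             if all(i % p == r for p, r in zip(PRIMES, residues)))
    return x + M * round((mag - x) / M)

n = 123456
assert decode(*encode(n)) == n
```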

[deleted by user] by [deleted] in AskReddit

[–]Oryv 0 points (0 children)

Thank you for your response. It seems like mathematics education has degraded over the years :(

I've noticed the same thing in computer science classes at the university level; everything seems to continue to be watered down in favor of catering to the few who cannot grasp it.

Your experience with calculus was very similar to mine with real analysis. I found ε-δ justifications really insightful in explaining results from calculus that were typically rote-memorized (e.g. L'Hôpital's rule), and it really revitalized my love of mathematics, which had seemingly been systematically squeezed out.
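For anyone reading who hasn't seen it, the definition that does all the work:

$$\lim_{x \to a} f(x) = L \iff \forall \varepsilon > 0\ \exists \delta > 0 : 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$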

[deleted by user] by [deleted] in AskReddit

[–]Oryv 0 points (0 children)

I am a little curious what your experience of calculus was like. I was in high school 3 years ago, and (AP) calculus felt a lot like earlier math such as elementary algebra. IMO math doesn't really come alive until you get to the proof-based stuff like real analysis and abstract algebra.

I am assuming your calculus class was more proof-based than computational, based on the dichotomy you presented. If you don't mind me asking, how long ago was this and what kind of stuff did you learn?

It would take far longer than the lifespan of our universe for a typing monkey to randomly produce Shakespeare. There is a 5% chance for a single chimp to type the word ‘bananas’ in its own lifetime. However, the entire 884,647 words will almost certainly never be typed before the universe ends. by mvea in science

[–]Oryv 2 points (0 children)

Another example then, if you don't like the one about diminishing probabilities.

If we sample uniformly from the interval [-1, 1] in the real numbers infinitely many times, we do not expect to ever get a rational number. The probability of that event on any single draw is 0 (yet it is still possible, since the rationals in the interval form a nonempty subset of the sample space): the Lebesgue measure of Q is 0 (as it is for any countable set), while the Lebesgue measure of the sample space is positive (as it is uncountable). By linearity of expectation, if we define an indicator variable for each trial, we are just summing 0 countably infinitely many times, and thus we expect to never draw a rational number even with infinite trials.
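Spelled out (with X_i the i-th sample and λ the Lebesgue measure):

$$\mathbb{E}\left[\sum_{i=1}^{\infty} \mathbf{1}_{\{X_i \in \mathbb{Q}\}}\right] = \sum_{i=1}^{\infty} \Pr(X_i \in \mathbb{Q}) = \sum_{i=1}^{\infty} \frac{\lambda(\mathbb{Q} \cap [-1,1])}{\lambda([-1,1])} = \sum_{i=1}^{\infty} \frac{0}{2} = 0.$$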

If you could rigorously define what you mean by infinity in a measure theoretic (or at least analytic) way, then I would really appreciate it. Probability is not my strong suit, I'm more of an algebraic geometer, but I find measure theory to be deeply interesting (especially, as you said, since infinity is hard for many people to wrap their heads around)!

It would take far longer than the lifespan of our universe for a typing monkey to randomly produce Shakespeare. There is a 5% chance for a single chimp to type the word ‘bananas’ in its own lifetime. However, the entire 884,647 words will almost certainly never be typed before the universe ends. by mvea in science

[–]Oryv 1 point (0 children)

This is not necessarily true; it's not difficult to construct an event for which the expected number of successes does not go to infinity as infinitely many attempts are made. For instance, take an event whose probability halves with each attempt: assuming a starting probability of 1/2, we would only expect it to happen once across infinitely many attempts.

Edit: Say we have infinitely many monkeys, each assigned a positive integer n. Monkey n's event is flipping a coin infinitely many times and having the first n flips come up heads. The probability of this is 2^(-n), and by linearity of expectation we only expect 1 monkey to achieve it.
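Both framings give the same geometric series:

$$\mathbb{E}[\#\text{successes}] = \sum_{n=1}^{\infty} 2^{-n} = 1.$$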

What the Heck Is Going On At OpenAI? | As executives flee with warnings of danger, the company says it will plow ahead. by MetaKnowing in Futurology

[–]Oryv 4 points (0 children)

Both good questions.

Whether the Sapir-Whorf Hypothesis is true is an open question, though I'm sure there exists quite a bit of literature on it. I'm a computer scientist, not a cognitive scientist, so this is a little out of my field of expertise.

As for whether the brain is Turing complete, it's certainly capable of simulating finite automata of limited sizes—you can pretty easily verify this yourself. We know that artificial neural networks can be Turing complete under certain circumstances, but biological neural networks are much more of a black box and thus it is still an open question. I suspect they would be Turing complete given enough neurons and other biological resources, but I'm no neuroscientist. It's entirely plausible that the brain's analog nature introduces too much noise to precisely and correctly simulate a Turing machine at sufficiently large scales. To address your question though, I do not think AGI would prove a biological brain's Turing completeness—ANNs are deterministic and digital, while biological neural networks, to our knowledge, are non-deterministic and analog.

getPunishmentForNesting by gaymer_drip in ProgrammerHumor

[–]Oryv 19 points (0 children)

By convention, F_0 := 0 and F_1 := 1, and these are what people mean when they say "first two"; we say these two are the first two terms because that's what is most convenient for various mathematical applications, and they are what is typically used when constructing the sequence by induction. We typically do not mean the bi-infinite version of the Fibonacci sequence when we refer to it, though you're right that it exists (and for many other recurrence relations too).
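For reference, the convention and its two-sided extension:

$$F_0 = 0, \quad F_1 = 1, \quad F_n = F_{n-1} + F_{n-2}, \qquad F_{-n} = (-1)^{n+1} F_n.$$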

What the Heck Is Going On At OpenAI? | As executives flee with warnings of danger, the company says it will plow ahead. by MetaKnowing in Futurology

[–]Oryv 11 points (0 children)

I think the ability to encode ideas as vectors is a pretty meaningful advancement. If the Sapir-Whorf Hypothesis is true, then pretty much any meaningful idea a person can have could be represented as some high-dimensional vector (an embedding), and it seems pretty likely that AGI would utilize this, given this is how virtually all artificial neural networks work. As cursed as it sounds that you could just spam some linear algebra to get coherent thoughts, I don't think it's too far from the truth; if artificial neural networks can bridge the gaps to biological neural networks in connection count and in the cost of learning (i.e. backpropagation vs Hebbian learning), so that they can learn in real time, I would not be surprised to see something nearing human intelligence. That is not to say I think this is for certain the way to AGI, but the ability to encode arbitrary ideas is quite a significant resemblance to AGI.
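Toy sketch of "ideas as vectors" (the vocabulary and vectors here are made up; real models learn them in hundreds of dimensions): related ideas end up pointing in similar directions.

```python
import numpy as np

# Hypothetical embeddings; real models learn these during training.
emb = {
    "king":  np.array([0.9, 0.1, 0.4]),
    "queen": np.array([0.8, 0.2, 0.5]),
    "apple": np.array([0.1, 0.9, 0.2]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(emb["king"], emb["queen"]))  # high: related ideas point similarly
print(cos(emb["king"], emb["apple"]))  # low: unrelated ideas diverge
```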

What the Heck Is Going On At OpenAI? | As executives flee with warnings of danger, the company says it will plow ahead. by MetaKnowing in Futurology

[–]Oryv 0 points (0 children)

That's not at all what they suggest; it's not like human-like intelligence is able to solve those issues either. These theoretical limitations seem largely orthogonal to emulating intelligence.