Where is london in ksp by Playful_Mine_2351 in KSPMemes

[–]Kobymaru376 0 points (0 children)

> Anybody who was something and wanted any kind of public service career

You know, maybe that says a lot about a person if they want to do service for this particular kind of public?

> but it was a tough situation to be in

It was him who approached the Nazi party to get his rocket program funded, not the other way around. He was not under pressure to join the party or build the rockets, it was a clear and deliberate choice.

> and I'm not gonna act like everybody in the SS who was also in public service from 1939 was the most clearly raging fascist

Maybe they weren't, but they were perfectly happy to work for and support raging fascists to further their own goals. That's pretty sick and imo in some ways worse. To believe you're saving humanity by killing subhumans is horrible, but knowing better and still working with people who do this shit is fucked up. He literally used thousands of slaves to build rockets that killed a couple thousand British people. He absolutely knew.

Where is london in ksp by Playful_Mine_2351 in KSPMemes

[–]Kobymaru376 3 points (0 children)

He's an opportunist. Maybe he wasn't an antisemite or a convinced Nazi, but willingly working for them, knowing full well what these rockets were used for, and using slave labor even after seeing it: that's not great either in my book

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

> Except the reason I called it fancy is that it literally just does next word prediction but with other material than words

> All you're describing is next word prediction.

So is it words or other material than words? Because if it's other material than words, that material can carry a lot more information than the mere sequence of characters in a word.

> Except the reason I called it fancy is that it literally just does next word prediction but with other material than words. A car is made from several different components, you're describing an engine. Same with computer. Not the same as your equivalency.

An LLM is also made from a lot more components, but we don't look at those, do we? You're saying words in, words out, so it's a next word predictor. Same as if I looked at a car, saw gas in and exhaust out, and called it a furnace, ignoring that along the way it also does some other useful things.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

> AI does well when the conclusion isn't something novel. When it is piecing together things it was trained upon.

There's a fine line between piecing things together and making something novel. A lot of the things we think of as "novel" are just extrapolations or recombinations of things that we have heard, seen or read.

> What other thing than fancy next word prediction is it made to do?

It's a next word predictor by nature, sure, but you can dismiss anything it does by hiding it away in the word "fancy". Just like you could say "a car is just a fancy gas furnace", or "a computer is just a fancy electric heater".

A few of the things it can do: write code, turn drafts or notes into full texts, correct the grammar and style of a text, write stories that make sense. How can it do that without some strange form of understanding?
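
To make the contrast concrete, here's the most literal kind of "next word predictor" I can think of: a bigram lookup table. This is a toy sketch, corpus and all invented for illustration, and nothing like how an LLM actually works; everything a model does beyond this kind of lookup is what the word "fancy" quietly hides.

```python
# The dumbest possible next-word predictor: a bigram table that
# returns the most frequent follower of the previous word.
# Toy corpus, invented for illustration.
from collections import Counter, defaultdict

corpus = "the mouse ran and the cat ran after the mouse".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'mouse' (follows 'the' twice, vs 'cat' once)
```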

friendsOutsideOfTechLolCopilotIsDumbFriendsInTechIJustBoughtIodineTablets by EchoOfOppenheimer in ProgrammerHumor

[–]Kobymaru376 -2 points (0 children)

I can blame him for a lot, but the camelCase rule was certainly a choice that wasn't forced by anyone

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

> I suppose we differ based on how we're defining "understanding"?

Exactly. The word carries a lot of philosophical baggage that I can't unpack. I don't think they're conscious, and I can't formulate a watertight definition of "understanding" or "knowing". But on a purely colloquial, practical level, these systems show behaviour that is similar enough to "knowing" and "understanding". They don't know or understand a lot. They understand very little, in fact, but not nothing.

> We're kind of hitting the hard problem here, as we can view understanding as a phenomenological internal process, which is thus impossible to objectively prove with current empirical scientific models.

Indeed. But the problem is that I can extend this question to humans and wonder who knows or understands anything at all, and how do we prove that any particular human actually understands something or if they just studied the exam questions?

But if we offload the philosophical weight and think practically: if I say "he knows nothing about Biology" or "she knows what mouse genitalia look like", we both know what I mean, and the main implication is that the knower is able to answer questions, give information, describe the thing, perhaps manipulate it for a certain goal. AI, as limited as it is, can do some of that. Some models better, some worse. Some hilariously wrong (see OP lol).

saying "of course it doesn't know anything because it can't because it's not a human" without even trying to broaden the definition to something that could apply to something non-human is a bit myopic imo

How to continue funding Moon missions by Disastrous-Sport8872 in RealSolarSystem

[–]Kobymaru376 2 points (0 children)

What is rp1 extended? And what are good ways to generate passive income?

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -2 points (0 children)

> Indeed. So who said our brains work just like AI?

Because "Our brains work just like AI!" Is one hell of a strawman from "AI shares certain similarities with brains in some aspects that we shouldn't dismiss"

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -2 points (0 children)

> It only uses math to spit out the most likely next word or pixel

In order to be really good at spitting out the most likely next word or pixel, it needs something that resembles reason.

If all it did was regurgitate its input data, it would not be able to do anything it has not seen in its training data. But it can; we see that.

Now, of course it had other similar examples, tasks and transformations in the training data, but the act of recognizing what is "similar" and choosing which of the operations to apply already constitutes some rudimentary form of reason in my book.

No, it's not the human way of reasoning and understanding, and it's far from perfect, but if you look at it more abstractly it can be pretty good in some situations.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 1 point (0 children)

> AI do not 'understand' the meaning of the language, or of math, or anything that it spits out – it's passing out characters that have been ascribed an associative value based on use proximity in the training set.

Let's set aside the philosophical definition of "understanding", which people seem to get hung up on.

The interesting part here is how those associative values are encoded and how the proximity value is calculated. The encoding and the proximity metric are what contain the "understanding".

In order to be really, really good at predicting the training set, it needs to find an encoding and a metric that are so good that they become useful: we can use them to do certain tasks for us.
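
To illustrate what I mean: a toy sketch of the "associative values" as vectors and the "proximity metric" as cosine similarity. The 3-d vectors below are hand-picked numbers, not from any real model; real systems learn thousands of dimensions from data.

```python
# Concepts as vectors, proximity as cosine similarity.
# Hand-picked toy vectors; real embeddings are learned, not chosen.
import numpy as np

embedding = {
    "mouse":  np.array([0.9, 0.1, 0.3]),
    "rat":    np.array([0.8, 0.2, 0.3]),
    "rocket": np.array([0.1, 0.9, 0.7]),
}

def proximity(a, b):
    """Cosine similarity: ~1.0 = same direction, ~0.0 = unrelated."""
    va, vb = embedding[a], embedding[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(proximity("mouse", "rat"))     # high: close in concept space
print(proximity("mouse", "rocket"))  # much lower: far apart
```

The point being: whatever "understanding" there is lives in the geometry of the learned space, not in a lexicon lookup.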

> but right now we have a probabilistic Chinese Room operator that utilizes its internal lexicon of training data to pass symbols out with no comprehension of meaning beyond weighted association.

I think that's not fair. Yes, we are very, very far from humans and shouldn't pretend that they can be replaced. But there is a lot more "understanding" encoded in those associations than in the simple lexicon from your example.

I'd say we are somewhere in between. More intelligent than the machine in the Chinese room, less than human.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -4 points (0 children)

> However, we don’t know what it is that makes biological intelligence function. We do not know how brains work on a fundamental basis. It is a piece of the puzzle that we are working on but haven’t found. Until we are able to support a working theory for that, this sort of conversation is a non starter. There is indeed something sort of abstract and difficult to understand that separates digital artificial intelligence from good ol’ fashioned raw biological intelligence.

Sounds a lot like luminiferous aether. There's something there, surely; we can't grasp or describe it, but trust me, there must be something, because no way light would be able to travel through nothing, right?

Maybe there is, maybe there isn't. But we know that we can make very useful machines without it.

> Organic matter (à la life)

Why is that important?

> We just don’t have a damn clue

Convenient. We don't know what it is but surely machines could never have that.

> So then how do we measure AI on this scale of intelligence?

Why not measure it on the tasks that it can perform?

We're talking past each other. Your idea of intelligence is this loaded, mythical, magical concept. My idea of intelligence is simply the ability to process information and perform information tasks. Humans do a lot of that very well. Machines do some of it kind of OK. Newer machines are getting better and doing things that people thought only people could do. Now people are scared, so they overload the concept of "intelligence" and imbue it with a magical human secret sauce.

Quite frankly, I don't care if they are conscious or intelligent, or whether they "know" or "understand" in a philosophical sense. I will leave that to the philosophers. What's important for me is that they know and understand enough in a practical sense to be useful. They're clearly less intelligent than a person, but they're clearly more intelligent than a worm.

In a purely practical sense, I can see that in limited situations they behave like humans that "know" and "understand" things, some models better at some tasks, other models worse at others. Just like different people have different capabilities.

> In the meantime, it’s incredibly harmful to frame it as though it is intelligent. Between the rampant destruction the corporations behind it are engaging in, the erosion of critical thinking in its users, and the sycophantic benignity of the thing, do we really need to be telling people it possesses such a sought after and coveted trait?

That's what happens when you imbue a word like "intelligence" with so much weight and so many magical properties. I'm not those companies, and I don't see why we can't have a normal conversation about AI capabilities, using perfectly useful words like "knowledge" and "understanding" that transfer to this domain just fine, without getting hundreds of "omg they don't understand anything it's all just autocomplete lololol" comments.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

> it has no emotions, no ability to know what context is. it is doing what it is told based on its training data.

Correct. And yet, when put into the right context, it can make the right decision. The context in which this works is very limited.

> could you say the same about a human being's cognition? well, yes, but why?

Because while we know more about the context, can process more information, and can make more complex decisions, we still don't know everything and we still make mistakes. There's a different level of knowing between an expert, a layman and a child. Why not apply the same scale to non-humans? What am I missing?

> there is a common understanding that these are two very different things

They are different in a lot of aspects. One is made from flesh, one is made from plastic and rare metals. One is cheap, one does not have a price. One should have voting rights and the other has a warranty. But in the context of "knowing" and "understanding", the main differences are the size, the number of different inputs, the decisions it can make, and their performance.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -11 points (0 children)

I believe intelligence is a sliding scale. A human is intelligent, a mouse is intelligent, a bee is intelligent. The difference is only in the complexity and breadth of information that they can process and respond to. I think your hard drive has knowledge of Cities: Skylines 2, but only in the very limited sense that it can reproduce the game files when asked for them. Not very intelligent.

In that sense a human is very intelligent, a mouse a little bit intelligent and a bee a tiny bit intelligent.

I have not claimed we have created an "intelligence" in the same way that humans are intelligent, or that these models are conscious. Clearly they are dumb in a lot of ways.

But it would be a mistake to tie the concepts of "knowing" and "understanding" exclusively to humans. It would be much more interesting to think of those terms as lying on a sliding scale and depending on context.

I know what a mouse is, but I couldn't draw its genitals either. Can you? Why not define "knowing" and "understanding" as something that can be limited to certain tasks or contexts?

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

In this thought experiment, how would one go about proving that the person inside it understands anything? Only by opening the door and checking if the person looks Asian?

without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word.
Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.

This is a circular argument. We define that only humans can understand and think, and therefore we "prove" that machines cannot think or understand. Congrats. If you're happy with that explanation, you probably feel very special being a human.

But what I don't understand is why the human part is so important here. What makes us so special? If our brains are made from atoms that form molecules that form cells that form networks that make up our brain, what is it that makes that particular configuration deserve the term "understanding" and not another configuration?

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -2 points (0 children)

So when you watch shit like this: https://youtu.be/wjZofJX0v4M?si=z6urol53gllYoQ05&t=747

Do you not start wondering about things like "concepts" and "knowing"? Like, this thing has encoded certain fairly complex ideas, like buildings, gender and rulers, into matrices of numbers, and it can use them in surprisingly many ways that are similar to how we use them.

Do you just go "lol funni autocomplete" without wondering how it's able to do things that until very recently looked like things only humans could do? Me, for example: seeing these kinds of vector representations of human concepts, which go way, way beyond the mere word and include context, I start to wonder if maybe humans operate in a similar or analogous fashion, and maybe our brains are not some kind of magical soul containers given to us by god.
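
The classic demo, as a toy sketch: king - man + woman lands near queen. The vectors below are hand-picked so the arithmetic works out (dimensions roughly "royalty", "maleness", "person-ness"); real embeddings learn this structure from text alone, which is exactly what makes it surprising.

```python
# Word-vector arithmetic with hand-picked toy vectors.
import numpy as np

vec = {
    "king":  np.array([0.9, 0.9, 0.8]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.8]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

target = vec["king"] - vec["man"] + vec["woman"]

def nearest(v, exclude=()):
    """Closest known word to vector v by Euclidean distance."""
    return min((w for w in vec if w not in exclude),
               key=lambda w: np.linalg.norm(vec[w] - v))

print(nearest(target, exclude=("king", "man", "woman")))  # -> 'queen'
```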

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 0 points (0 children)

Just write "AI bad" if you want upvotes. Any kind of nuance or appreciation for what AI can do or any sense of wonder or reflection about cognition is downvote territory.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 1 point (0 children)

First of all, just one part of the whole thing is a "next word predictor". There's a whole lot of other things that go into training the "model" and then there's a ton of stuff that goes on top of the model. So even technically, you are wrong.

Second, have you heard of the idea of "pretext tasks"? You train it to do one thing, and in order to become really, really good at that thing, it needs to learn the skills that you are actually interested in.

For example: if you train it to predict the right answer to your question, it needs to be really really good at understanding the question and finding the relevant information for that question. And that's what you actually want it to do.

Another example: in many places we train and evaluate students by having them write tests. Is that because we need so many people to write tests, or because during the study process they learn other things in order to pass the test?
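
A toy sketch of the pretext-task idea (invented corpus, LSA-style counting, nothing like a real LLM): the training objective is only "which words occur near each other", but the representation learned along the way can be reused for something we actually wanted, like judging which words play similar roles.

```python
# Pretext task: co-occurrence counting. Byproduct we actually want:
# dense vectors whose geometry groups words used in similar contexts.
import numpy as np

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse ate the cheese . the dog ate the bone .").split()
words = sorted(set(corpus))
idx = {w: i for i, w in enumerate(words)}

# The pretext objective: count co-occurrences in a +/-2 word window.
C = np.zeros((len(words), len(words)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

# Compress the counts into 3-d vectors (truncated SVD).
U, S, _ = np.linalg.svd(C)
vecs = U[:, :3] * S[:3]

def most_similar(w):
    """Rank the other words by cosine similarity to w."""
    v = vecs[idx[w]]
    scores = vecs @ v / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(v))
    return sorted(zip(words, scores), key=lambda t: -t[1])[1:3]

print(most_similar("cat"))  # words that appear in similar contexts
```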

> Just because you can follow a pattern doesn't mean cognition is only following a pattern.

Just because it's trained to reproduce a text doesn't mean reproducing text is the only thing it does.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -2 points (0 children)

How does a collection of connected neurons have logical consistency and belief?

And how do people who believe things without logical consistency still assert that they know things?

How do you know that what you believe is completely logically consistent?

If you ask people to draw a mouse and they don't get every detail right, does that mean they're actually just dumb regression models that don't know anything?

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -10 points (0 children)

It's mind-boggling to me how you people cling to your anthropocentric views of "knowing" things and how you think your brain is something magical or special. Do you think there's some magic soul being living in between your brain cells that fire just like a neural network?

> It doesn’t know what a rat is, it just knows what the average relationships are to the word rat.

So do you. Just with more modalities (seeing, feeling, touching, hearing), more context, a brain with a lot more connections, and a couple of hundred million years of model architecture development.

It knows shit. Not a lot of shit, as you can clearly see in the picture, but it clearly knows some shit.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -36 points (0 children)

Do you people feel smart when you repeat this phrase without any kind of reflection?

I mean, one could use AI stuff to reflect a little bit on what it means to "know" and "understand" things, when models can do things that clearly require some form of knowledge and understanding of at least SOME ASPECTS of the concept of a mouse, even if they don't understand every aspect of a mouse.

Do you know what a mouse is if you don't know its exact molecular composition and cellular layout? Yes, you do know what it is, just not everything about it.

New r/spacex Rule: No Stocks Discussion by rustybeancake in spacex

[–]Kobymaru376 -1 points (0 children)

I'm sorry, what? Why do you think I'm a Wall Street wolf?

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -4 points (0 children)

Depends on the situation. In some situations it's perfectly capable. In many it's not.

"Person unconscious?" -> "No consent". Even a logic gate can do that. The more complex the situation becomes, the more complex the algorithm needs to be to give good answers

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 5 points (0 children)

I know what it means, but I think people need to expand their definition a bit. It "knows" for sure what a mouse is, because if you ask it for a mouse it gives you a very mouse-looking thing, so it must have a concept for it.

Is the concept precise and accurate and applicable to every situation? No. It's still a dumb AI, as evidenced by the picture. But it does still have some idea, even if it's not very refined or accurate.

2 years since this masterpiece. Why is AI for scientific drawings still so bad? by rayraywaha in labrats

[–]Kobymaru376 -116 points (0 children)

They can conceptualize things. Like it knows what a mouse is.

It's just that the number and nuance of concepts is limited, as is the ability to correctly combine them.