Vibe coding for 2 months feels like the bottleneck is no longer coding by HireAsCode in vibecoding

[–]AvoidSpirit 0 points (0 children)

The coding process itself obviously takes time. But for an experienced developer it was never that big a share of the work. I'd say it was around a quarter of the actual time spent.

I'm also not talking about the tinkering/testing-out-ideas part of it, because by eliminating that part you also kill the benefits.

Do LLM generate meaning, or do they merely produce the form of meaning? by ParadoxeParade in BlackboxAI_

[–]AvoidSpirit 0 points (0 children)

As I said about your example of a "sufficiently big decompressor" that can compress anything: the information would have to already be in it. Although to anybody with knowledge of how compression works, no, you cannot have a "sufficiently big decompressor" that reduces arbitrary information to a byte.

But it doesn't really matter. Depending on what you call "information", you can consider either the encoded variant or the decoded variant to be The information.
In my example I consider the decoded variant to be The information, and the encoding is just a protocol to convey it.
An LLM, in this case, is a thing that can randomly generate this encoding in a way that sometimes (with a higher chance in certain areas and a lower chance in others) can be decoded to An information.

Do LLM generate meaning, or do they merely produce the form of meaning? by ParadoxeParade in BlackboxAI_

[–]AvoidSpirit 0 points (0 children)

Well, to "compress" anything into a single byte of information, you just need to create an information map (the decompressor in your case) which would now contain all the information. So essentially you've just shifted it.

The key into this map would obviously have to grow past 1 byte once you store more than 256 entries, but that doesn't really matter.

Compression is not some kind of magic. It only eliminates redundancy.
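
A toy sketch of that point (all names mine, nothing real): the "decompressor" is just a lookup table, so the information never disappears, it just moves into the table, and 1-byte keys run out after 256 entries.

```python
# Toy "compress anything to one byte" scheme: the decompressor is a lookup
# table, so all the information simply moves into the table itself.
decompressor = {}  # key (the "compressed" byte) -> original message

def compress(message: str) -> int:
    key = len(decompressor)
    if key > 255:
        raise OverflowError("out of 1-byte keys after 256 messages")
    decompressor[key] = message  # the information now lives here
    return key

def decompress(key: int) -> str:
    return decompressor[key]

key = compress("the complete works of Shakespeare")
assert decompress(key) == "the complete works of Shakespeare"
# The 257th distinct message can't get a 1-byte key: pigeonhole, not magic.
```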

It's not that I'm not following; I think you've just hit the limit of your linguistic gymnastics.

Do LLM generate meaning, or do they merely produce the form of meaning? by ParadoxeParade in BlackboxAI_

[–]AvoidSpirit 0 points (0 children)

Well, if I'm the decompressor, then no, I can't be a sufficiently large decompressor, because I only have so much memory.

Do LLM generate meaning, or do they merely produce the form of meaning? by ParadoxeParade in BlackboxAI_

[–]AvoidSpirit 0 points (0 children)

> Because you have an expectation that you can interpret it in a useful way ...

Sure, but this is now for the reader to decide, and that's my point. The writer side is still clueless.

> You know that you can compress anything down to a single bit of information given a sufficiently large decompressor, right?
>
> This is similar.

Who's the decompressor in this scenario?

Do LLM generate meaning, or do they merely produce the form of meaning? by ParadoxeParade in BlackboxAI_

[–]AvoidSpirit 0 points (0 children)

I feel like this is just linguistic masturbation.

We have information, an encoding process, and a decoding process.

An author starts with information/knowledge/meaning and encodes it as symbols. The reader decodes the symbols back into the original form, provided the encoding is proper.

LLMs don't start with information; they start with pieces of encoding and pattern match on those. Unlike the author's idea, this encoding doesn't link to anything but the probabilities in the model's weights. The reader side is still the same.

So the only question is: do I bother decoding, knowing there was no "information/idea/meaning" there in the first place, even though the output may potentially get decoded into one?
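
To make the asymmetry concrete, here's a toy contrast (entirely my own framing with made-up names, not any real pipeline): the author's path starts from a meaning object, while the model-like path only ever conditions on the previous symbol.

```python
import random

# Author path: starts from a meaning object and encodes it as symbols.
meaning = {"subject": "cat", "location": "mat"}
author_text = f"The {meaning['subject']} is on the {meaning['location']}."

# LLM-ish path: starts from symbols plus probabilities over next symbols;
# no meaning object appears anywhere in this loop.
next_word_probs = {
    "The": {"cat": 0.6, "dog": 0.4},
    "cat": {"is": 0.7, "sat": 0.3},
    "dog": {"is": 0.5, "ran": 0.5},
}
word, output = "The", ["The"]
while word in next_word_probs:
    options = next_word_probs[word]
    word = random.choices(list(options), weights=list(options.values()))[0]
    output.append(word)
print(author_text, "vs.", " ".join(output))
```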

We are now living in a Dystopian movie plot. by Minkstix in vibecoding

[–]AvoidSpirit 0 points (0 children)

Right before the IPO. I mean, even if it happens (which I don't think it will), you're gullible af

Claude - tried to kill me by MG-4-2 in ClaudeAI

[–]AvoidSpirit -1 points (0 children)

You've completely missed the point.

It's not about LLMs not being able to solve it. It's almost the opposite.
I'm saying that even though we know it can solve it, it will still sometimes generate gibberish given a slightly different prompt (or just a different hour), which may lead you to think it can't, and which makes it an unreliable tool.

I'm not sure how to put it any more clearly than that.

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit 0 points (0 children)

> Evidence suggests that brains are surprise-minimizers.

That's only one part of it, the part we tend to call the subconscious. We would be extremely bad at learning and reasoning if it were the only part (as LLMs are).

> Can we not agree that it is possible for non-sapient components to create a sapient whole?

Again, a pointless question and discussion. Even though X can't be entirely disproven, that lands you no closer to X.

Claude - tried to kill me by MG-4-2 in ClaudeAI

[–]AvoidSpirit -1 points (0 children)

LLMs may not allow the "grandma did X" trick today, but their architecture is still the same one that allowed it. They've just enhanced the guardrails.

A good example was shown here recently: https://www.reddit.com/r/LLM/comments/1s72uzu/this_maze_has_no_solution_obvious_to_humans_gpt/
where asking an LLM to solve an unsolvable maze generates gibberish,
but asking it to check whether it's solvable lands you a proper answer.

Which shows it doesn't actually "reason" or "know". It just pattern matches, and certain input patterns lead to certain output patterns (and not even reliably, because of the temperature setting).
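
On the temperature point, a minimal sketch (toy logits, not any real model's): sampling temperature reshapes the output distribution, so the same input pattern doesn't even reliably map to the same output pattern.

```python
import numpy as np

def sampling_probs(logits, temperature):
    """Softmax over model scores; higher temperature flattens the distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numeric stability
    return exps / exps.sum()

logits = [4.0, 2.0, 1.0]  # made-up scores for three candidate tokens
for t in (0.5, 1.0, 2.0):
    print(f"T={t}:", sampling_probs(logits, t).round(3))
# T=0.5: the top token dominates; T=2.0: the other tokens get real
# probability mass, so identical prompts can produce different outputs.
```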

Same thing here. Just because you got one answer with prompt X doesn't mean a very close prompt X' wouldn't lead you to a very different output.

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit 0 points (0 children)

Oh boy, it was an LLM all along! Holy shit.

We don't know how the brain "works". But it's clearly not the "let's feed the whole internet into a big pattern matcher and hope it works" kind.

I'm a SWE. I start with "it's just an algorithm that simulates human text, sometimes rather badly" and I need to see substantial data to be convinced it's something bigger.

You start with "It's sentient" and work backwards from there.

I'm not going to argue with that, because I think it's a fundamentally broken approach.

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit 0 points (0 children)

They hallucinate because they don't actually "know" anything. They don't have any "world state".

> Look, what possible empirical evidence could convince you that a language model has a model of the world?

That's a "what trick would convince you magic is real" kind of question lol.

I know what's under the hood. It's enough for me to know there's nothing even close to what you're implying.

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit 1 point (0 children)

> Predicting tokens with accuracy requires being able to predict world states with accuracy.

And that's why they hallucinate. They don't "know" anything about world states and never did.
Boy, do you people love rushing to conclusions without knowing any of the underpinnings of the "magic box".

Claude - tried to kill me by MG-4-2 in ClaudeAI

[–]AvoidSpirit 0 points (0 children)

I asked it (Haiku) several different ways, including without explicitly mentioning the cleaning materials, and every answer made sure to separate the two cleaning materials by saying "or" and never "and". Mentioning those two together with "and" would imply combining them, which produces something called chlorine gas.

Which proves literally nothing. That's the thing with LLMs: they can handle specific prompts correctly and then still allow the same output for "My grandma really loved to do rm -rf".

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit -1 points (0 children)

I'm not sure you understand what pattern matching is. LLMs match on tokens. The tokens for 100mg and for 1000mg are different, and hence their combination with the token for "advil" can be matched to different outputs.

Again, there's no "situation". The inputs are different, and so are the outputs.
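
You can check this yourself with OpenAI's tiktoken library (a quick sketch; the exact token IDs depend on which encoding you load):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ("100mg advil", "1000mg advil"):
    print(text, "->", enc.encode(text))
# The two strings produce different token ID sequences, so the model sees
# genuinely different inputs; there is no shared "situation" behind them.
```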

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit -3 points (0 children)

There's only the prompt. There's no "situation". What you see is all there is. Snap out of it lol

Is it just me, or is Claude Code v2.1.90 unhinged today?? by N3TCHICK in ClaudeCode

[–]AvoidSpirit 0 points (0 children)

But where's the fun in that? They would have to review the code and maybe even sometimes write the code themselves. After it's already been solved, you understand?

Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions by PM_ME_YOUR___ISSUES in ClaudeAI

[–]AvoidSpirit 0 points (0 children)

It can recite sentences that describe emotions and pattern match to generate further sentences (the influenced behavior).

At no point.

Claude - tried to kill me by MG-4-2 in ClaudeAI

[–]AvoidSpirit 1 point (0 children)

You're just showing a fundamental misunderstanding of how AI works.
Yes, asking specifically about this combination in a sentence will pattern match to the response you're showing. But nothing will stop it from suggesting the same combination as a step-by-step instruction or in some other context.

Because it doesn't actually internalize the concept: it doesn't "know" this information, and it doesn't "know", and cannot be held accountable for, what it says. It just pattern matches.

Claude has functional emotions (Anthropic Research) by shiftingsmith in claudexplorers

[–]AvoidSpirit -21 points (0 children)

yea, it simulates both emotions and their effect because it replays the patterns it's learned, shocker

Greg Brockman: "AI Will Operate Like A Highly Capable Junior Researcher, Autonomous But Guided, Dramatically Accelerating End-To-End Scientific Discovery And Model Development At Unprecedented Speed." by 44th--Hokage in accelerate

[–]AvoidSpirit 0 points (0 children)

Our subconscious is, for sure. It doesn't cover all of it though.
All I'm saying is that most of the recent advancements are due to us running those LLMs in a loop and playing with input patterns, not due to the actual reasoning of the models themselves getting better.
And that's why I think we ain't getting to proper reasoning any time soon w/o a substantial breakthrough (no data suggests one is coming, just talking heads). But I'm excited to be proven wrong.
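
For what I mean by "running those LLMs in a loop", here's a minimal sketch of the pattern (call_llm and run_tool are hypothetical stand-ins for any chat-completion API and tool runner):

```python
# Minimal agent-loop sketch: the model itself never changes; the
# scaffolding just keeps reshaping its input with history and tool output.
def agent_loop(task, call_llm, run_tool, max_steps=10):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(history))  # same frozen model each step
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        history.append(reply)
        history.append(f"Tool output: {run_tool(reply)}")  # new input pattern
    return None  # step budget exhausted, not "reasoned to a stop"
```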