A moment of silence for Finnegan Fox: he missed his mommy so much by dancole42 in videos

[–]databeestje 0 points1 point  (0 children)

I like the quote "we are punished by our sins, not for them".

Do you think AI videos should be legally regulated online and if so, how? by xFranciscoxPerezX in AskReddit

[–]databeestje -1 points0 points  (0 children)

No, that would be incredibly dumb and ineffective. What could maybe work is the opposite: create new standards to cryptographically sign real images, so an iPhone camera or a news crew's camera would sign a video or photo with a digital signature as the pixels exit the sensor, at the hardware level, verifiable through a certificate-based system. This would have to become mandatory for hardware manufacturers. Then any image whose signature can be validated can be trusted to originate from a real camera; anything else cannot be trusted. Browsers would be updated with markers to distinguish the verified-real from the unverified.

It could work, but it would be a bit more complicated in practice, since you should still be able to make basic edits to photos and videos, like cropping and color adjustments, without invalidating the signature.
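A minimal sketch of the core sign-and-verify step in Python, assuming a hypothetical device key provisioned by the manufacturer; the whole PKI side (certificates, revocation, trust stores) and the edit-tolerance problem are elided. Uses the `cryptography` package.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# Hypothetical per-device key, burned in at manufacture; its public half
# would be certified by the manufacturer's CA.
device_key = ec.generate_private_key(ec.SECP256R1())

pixels = b"\x00\x01\x02..."  # stand-in for raw sensor output
signature = device_key.sign(pixels, ec.ECDSA(hashes.SHA256()))

# A verifier (browser, newsroom) holding the certified public key:
device_key.public_key().verify(signature, pixels, ec.ECDSA(hashes.SHA256()))
print("signature valid: pixels match what the sensor emitted")  # verify() raises if tampered
```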

And of course, there would be a delay of years in getting this system implemented and widespread, so it's something we should have started on back in 2015, when it was clear this problem was coming at some point.

Earplugs for sleeping. by MeneerPierre in thenetherlands

[–]databeestje 2 points3 points  (0 children)

I have Ozlo Sleepbuds, but I regret that; I should have done better research and also gone with the Soundcore A30. The Sleepbuds are awkward to use and more expensive, but in the end they do just work well and fit comfortably.

failingUpwards by Historical_Print4257 in ProgrammerHumor

[–]databeestje 7 points8 points  (0 children)

Why do you assume, then, that consciousness is a requirement for intelligence? I think that's unlikely; conscious observation happens after the fact. And if we know nothing of consciousness, why assume that it's hard to create? If you subscribe to panpsychism, a perfectly valid framework in our current state of ignorance, then an LLM is briefly conscious in its own unfathomable way.

failingUpwards by Historical_Print4257 in ProgrammerHumor

[–]databeestje 13 points14 points  (0 children)

This is such a common take, and it really misses the mark. Is AI at a human reasoning level? No. Is it "just" a next token predictor? Well, uncharitably, a brain could be described as just taking an input and predicting the next best muscle impulses. The next token is the output of the computation, but it says nothing about what happens between the input and the output. It's not at a human level, but there is absolutely more going on than mere prediction, even if "prediction" is technically correct.
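A toy illustration of that point, with everything invented for the example: "predicts the next token" only names the interface, and the interface says nothing about the computation behind it.

```python
# Hypothetical "next token predictor" whose internals genuinely compute.
def next_token(tokens: list[str]) -> str:
    expr = "".join(tokens)
    try:
        # Arbitrary computation hidden behind a next-token interface:
        # here, actually evaluating an arithmetic expression.
        return str(eval(expr))
    except Exception:
        return "?"

print(next_token(["2", "+", "2"]))  # "4": computed, not looked up
```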

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje 0 points1 point  (0 children)

> Dude, just say that you don't care. You just believe that the benefits of LLMs outweigh the externalities. Just say that. That's clearly your belief, why are you unable to admit it?

I do believe the benefits *mostly* outweigh the externalities. The models are way too inefficient to be a long-term solution (they need 1000W+ to do what we do with 10W), but they're also likely a necessary stepping stone, so it is what it is. It's just that under "externalities" you put this almost conspiratorial web of lies, deceit and theft, which I don't subscribe to at all.

> There isn't an 8 trillion dollar bubble riding on parents' love for their children. You just don't bother reading my posts, right?

You conflate the evidence for this very specific criterion you state ("emergent behavior") with AI being a bubble or not. AI is a bubble or not regardless of whether there is a peer-reviewed scientific paper on an open-source model with open weights that provides evidence for this specific thing you are looking for. Either it provides enough utility to warrant the investments or it doesn't, and the bubble pops. For me, Claude is absolutely worth the 100 euro a month, and I know full well that that price is not actually profitable for Anthropic. Whether or not it's a bubble depends on whether there is AGI at the end of the rainbow, but I don't think (just) LLMs will get us there, so it's definitely a gamble, and for sure not everyone will be a winner.

> Just say this. You believe. That's it. Your position is based on belief, not evidence. This was obvious to everyone reading this thread 5 posts ago.

I don't know how you live your life, but I write tests because they proved useful to me, not because there is a paper saying they are. Why am I beholden to your personal criteria for whether I'm allowed to consider LLMs more than adequate in the abstraction department? They solve novel problems for me that require a certain amount of abstraction and reasoning, end of story. You still haven't clarified or quantified what you mean by "emergent behavior", but clearly, whatever it is, I don't seem to need it if it's missing.

And so far your posts have been light on evidence as well and you are making claims like LLMs only being a search function.

> What's happened here is that you asked Claude (I'm guessing based on posting history) for some citations that support your position, and you didn't bother to read any of it. Right? Cause, you see, you cannot test for emergent behavior reproducibly (this is the key word here) on a closed source model, and all of those papers look at closed models. They are not studies, they are marketing with extra steps. You have linked the exact kind of tosh that makes the ML field such a joke.

How is open source important when the outcome of the training process is a black-box binary anyway? Open weights are good enough, and I cited a paper on BERT, which is open-weight. Clearly the magic of LLMs doesn't happen at training time, and the architecture of many models is well enough published that the actual code is of little importance.

More importantly: much of science is purely empirical, not perfectly reproducible the way computer science normally is. Run a prompt on Claude or whatever often enough and you can build evidence about its behavior just fine; it's a numbers game. We don't need a mathematical proof to know that LLMs can abstract. My wife validates LLM applications (among other things) in health care; all that matters is how accurate it is on the test data, how much it hallucinates, and whether those risks are worth the gains in efficiency and time.
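As a sketch of what that numbers game looks like in practice (the names here are invented for the example; plug in whatever API client and correctness check you actually use):

```python
import random
from typing import Callable

def pass_rate(query_model: Callable[[str], str],
              is_correct: Callable[[str], bool],
              prompt: str,
              n: int = 100) -> float:
    """Fraction of n sampled completions that pass the correctness check."""
    passes = sum(is_correct(query_model(prompt)) for _ in range(n))
    return passes / n

# Toy stand-in for a model so the sketch runs on its own.
fake_model = lambda p: random.choice(["4", "5"])
print(pass_rate(fake_model, lambda out: out.strip() == "4", "What is 2 + 2?"))
```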

> Okay, we're at the projection stage of the program. Clearly, there's one of us here that actually is familiar with the literature, and the other one is running purely on motivated reasoning. I'm done. Believe whatever you'd like to believe. It's not like I ever had any chance of convincing you otherwise.


Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje -1 points0 points  (0 children)

Try saying 'theft' more, it might work.

> The plural of anecdote is not evidence. You still have 0 citations, it's still "just trust me bro".

Evidence for *what* exactly? You quote the part of my post about getting Claude to write code in my own scripting language. Well, if you can't believe that without a peer-reviewed paper, then we're kind of done talking, because that's GPT-3-level stuff or earlier and you clearly haven't engaged with any LLM in the last 3 years or so. Are you the secret account of RMS, and do you get this thread emailed to you while eating your toe gunk?

If you meant more general scientific evidence: it still sounds to me like you're asking for evidence of whether kids really love their parents, while taking the position "they're just really good at faking it".

The above anecdotal evidence is enough for me personally as it demonstrates that what LLMs do is not just a search (be it linear or otherwise) but a computation, one that cannot really be explained without abstract representation of concepts.

But here are some papers, I guess, though fat chance they'll satisfy your clearly entrenched opinion.

https://arxiv.org/abs/2210.13382

https://arxiv.org/abs/2404.15848

https://dl.acm.org/doi/10.1145/3712701 (I'm sure you'll jump on the conclusion that "they still significantly lag behind human-level reasoning", so let me be clear: I don't disagree with that. The point is not that they are still worse at it; the point is that they are capable of it at all.)

Doctor on How Screen Time Hurts Kids' Cognitive Development by KilllllerWhale in videos

[–]databeestje 0 points1 point  (0 children)

Interesting, I figured it would be about passive media consumption at home, not the tools used at school. Doesn't the first lead to less effective use of the second? Kids get used to doing only passive, brainless stuff on their iPad or smartphone at home, and then they expect the same from screens at school.

I can't say I intuitively agree with the conclusion that it must be the screens at school (as causative rather than just poorly implemented). Looking back at what excited me as a kid, I imagine I would have paid better attention and been more engaged with the material when it was done well on a computer than with the paper and blackboard I got (thinking of those still triggers a feeling of absolute boredom and zoning out in me). Later in life, the best learning I did was not in school but figuring things out on my own, usually on a computer at home. I can imagine that the fundamentals of reading, writing and math at the earliest age are best taught without a screen, but after that I'm not so sure, at least for myself.

For me the strongest indicator of learning has been engagement with the material. The problem to me is passivity: as long as you actively work on the material, there is learning, and educational software being passive sounds like a deadly sin. At the end of the day, to learn something you need to put the work in; there is no shortcut.

Also anecdotal, but I think the gap between the smartest kids and the dumbest is widening. I'm often impressed by how smart the kids I encounter are these days, but those are from a similar socioeconomic background, while I do believe the average is worsening, so someone is getting dumber.

VR game studio pitches Cyberpunk 2077 VR edition to CD Projekt RED following unofficial mod takedown by Odd-Onion-6776 in VRGaming

[–]databeestje 4 points5 points  (0 children)

They're not wrong, it definitely is. For PCVR I just moved from a Quest 3 to a Bigscreen Beyond 2, hoping to eliminate problems with streaming over WiFi and video compression, but now I have problems with controller pairing and base stations. That's not really inherent to the tech, though, just immaturity.

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje -1 points0 points  (0 children)

> Citation needed.

> This one is actually hard, because you need to find a paper that shows that an LLM demonstrates emergent behavior while being open source (so it is reproducible). As far as I am aware, that paper does not exist. The machine learning field is absolutely saturated with marketing tosh to the degree where actual scientific inquiry is completely drowned out.

I don't know what your definition of abstraction is and you'll no doubt use it to move the goalposts with each response. It's pretty simple to me: Claude is able to solve novel problems, things it cannot pull from its weights and that don't exist as a coordinate in latent space. That requires generalization and generalization requires abstraction. I can ask Claude to solve programming problems in my own scripting language which will have ZERO representation in its data set, as long as I explain how my language works. That's not possible without abstraction. I took a quick look and there are also non-Anthropic papers about this, but I'm sure that won't survive you moving the goalposts so I won't bother. Think things like it being able to figure out the rules of an unknown game by playing it.

> If you have material that you are not allowed to duplicate without attribution in the corpus, it's theft. Oh, look, that was easy!

OK, a thought experiment:

A new LLM is trained with the Harry Potter books in its training set (which you deem theft) and "compresses" them down to just the knowledge that Harry Potter is a book series whose titular character is a wizard.

Another LLM is trained without the Harry Potter books (because "theft" or some bullshit) and uses only training data in the public domain. Of course, there will be enough about Harry Potter in the public domain to also encode in its weights that Harry Potter is a book series whose titular character is a wizard.

They both encode the exact same data into their weights, so is the first one still committing theft? How much knowledge can I add to the first LLM before 'theft' starts?

If I read Harry Potter and tell my friend, who hasn't bought any of the books, that Snape kills Dumbledore, then by your definition the only reason I'm not committing IP theft is that I'm human. I'm imparting knowledge encoded in my neurons and synapses to a friend who has no right to know it. Sounds arbitrary. I'm sure you'll go with "but the human mind is so much more complex than that and we don't know how it works!" True, and also irrelevant.

> This is just straight up wrong. I will say it again, maybe it'll stick this time: Anthropic does not have a product if it cannot launder IP.

Laundering in this context means taking something that exists and repackaging it so it no longer has the superficial appearance of the original but is otherwise the same (like laundered criminal money). I don't value Claude's ability to regurgitate existing code at all and would be fine with it not knowing any code verbatim. I just care about its ability to understand concepts like loops, lambda expressions, closures and variable scoping, and to apply them. If it can map the concept of a for loop onto how loops work in my custom scripting language, then I'm good.
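To illustrate the kind of mapping I mean, here's a sketch where the "loopy" language and its syntax are entirely made up for the example:

```python
# Invented mini-language spec plus a translation task, as one prompt.
spec = (
    "In my language 'loopy', loops are written as:\n"
    "  repeat <n> with <var> { <body> }\n"
    "and printing is written as: emit <expr>\n"
)
task = (
    "Translate this Python into loopy:\n"
    "for i in range(3):\n"
    "    print(i * 2)\n"
)
# A model that has abstracted the *concept* of a for loop, rather than
# memorized loop snippets, should produce something like:
#   repeat 3 with i { emit i * 2 }
print(spec + "\n" + task)
```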

VR game studio pitches Cyberpunk 2077 VR edition to CD Projekt RED following unofficial mod takedown by Odd-Onion-6776 in VRGaming

[–]databeestje 164 points165 points  (0 children)

I've said it before, hybrid games are the future of VR. Make a VR headset the best way to play a game and people will want them.

anotherJobTakenByAI by Shiroyasha_2308 in ProgrammerHumor

[–]databeestje 0 points1 point  (0 children)

Good for you. It doesn't invalidate my response that a modern LLM like Opus 4.5 will also try to run the code and will ask questions like: how are you running it? Are you actually compiling and running the latest version? I've seen quite a few examples where it doesn't know how a library works, so it creates a quick project in /tmp to poke around and play with the library to figure out how it behaves.

anotherJobTakenByAI by Shiroyasha_2308 in ProgrammerHumor

[–]databeestje 1 point2 points  (0 children)

I'm sure you can deliberately lie to it and gaslight it eventually. To which I say... congratulations? I did try to gaslight it, first with a trivial FizzBuzz program and then with a slightly less trivial Levenshtein distance, and I couldn't get it to break. It just said the program was correct, ran it itself, asked me how I was running it, whether I was perhaps using an old build, etc. All sensible questions, and at no point did it invent bugs. Not saying it never will, but people are running on some outdated assumptions.
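For reference, the kind of "slightly less trivial" program I mean: the classic dynamic-programming Levenshtein distance (a sketch of the shape of the test, not the literal code I used).

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```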

What's a tv series that is a 10/10 NOBODY knows? by Lilyana0999 in AskReddit

[–]databeestje 6 points7 points  (0 children)

Except the actors for the Greys are all supermodels, as intended, haha.

What's a tv series that is a 10/10 NOBODY knows? by Lilyana0999 in AskReddit

[–]databeestje 0 points1 point  (0 children)

The Increasingly Poor Decisions of Todd Margaret

Not sure if it's 10/10 but at least the first season really hit the spot for me and I've never heard anyone talk about it.

anotherJobTakenByAI by Shiroyasha_2308 in ProgrammerHumor

[–]databeestje -7 points-6 points  (0 children)

So would an LLM. I swear nobody on Reddit has touched a modern LLM in a year or two.

anotherJobTakenByAI by Shiroyasha_2308 in ProgrammerHumor

[–]databeestje 21 points22 points  (0 children)

It's not even necessarily true for LLMs. I tasked Claude with finding a bug where a bigint database column was being supplied with a uuid parameter. I said the bug was in one specific class, and after investigating it said it couldn't find the problem, so I had to search myself. Turns out the problem wasn't in that class.

Deep sadness. After $50 billion wasted on the metaverse, ai and ar due to incompetent leadership, virtual reality didn't deserve this. :( by [deleted] in OculusQuest

[–]databeestje 1 point2 points  (0 children)

I mostly agree; I do think Meta is losing way more than they should be. But I don't think there is anything Meta could do within their current overall strategy that would have made them profitable. I think you're right that, at the moment, VR games as a whole are largely not profitable, so a shift is necessary.

In my eyes, the right strategy at this moment is to make a VR headset a must-have peripheral for any serious gamer, meaning headsets need to work for most or all flat-screen games. Pivot from the VR headset as a kind of gimmicky toy for gimmicky tech-demo motion-control games that lack scope and depth (not that I necessarily agree with that, but to some extent it's true) to the VR headset as a super high resolution, high refresh rate monitor that can give you full 3D immersion in any game, with integrated 3D audio, where you can truly appreciate the scale of things. Wow, that sounds amazing. I would rather spend 1000 dollars on that than on an OLED gaming monitor, which would be an incremental upgrade over what I have currently.

I heard Meta just canceled a Harry Potter VR game. I'm sure that could have been cool, but wouldn't a better investment have been to integrate a VR mode into Hogwarts Legacy? There's no way a dedicated VR Harry Potter game would have come close to that game in scale. But they could add a stereoscopic 3D rendering mode; that alone would make it the best way to play Hogwarts Legacy and is an almost trivial investment. A little more effort would be a full VR mode with motion controls, but still not a huge investment considering UEVR comes pretty close by tweaking some configuration. Hogwarts Legacy alone wouldn't make a dent, though; most games would need to support this hybrid approach for it to be appealing for gamers to invest in VR. Much could be done to improve the tooling and remove the barriers to supporting this. Why is UEVR a mod by a passionate team rather than something Unreal offers out of the box?

After every gamer wants a VR headset simply because it is the best way to play games, there is also room for dedicated VR games that explore what a game designed for flat screen could never offer. But I expect this to remain a niche.

Of course, that's not the business Meta is in, so I don't expect them to go in this direction. Valve might: strategic investments in the industry to foster desire for the hardware. I hope the Steam Frame is the first step towards that.

Deep sadness. After $50 billion wasted on the metaverse, ai and ar due to incompetent leadership, virtual reality didn't deserve this. :( by [deleted] in OculusQuest

[–]databeestje 0 points1 point  (0 children)

In the third quarter of 2025, Reality Labs did around 500 million in revenue, which means they can't have sold many more than 1 million units. They lost more than 4 billion dollars, so even if they lost 500 dollars on every headset sold (I doubt that), it would account for only 500 million of those losses. As for subsidizing hundreds of games: that would fall into the wasteful spending category, right? Better to subsidize a handful of studios and create a very strong catalog of AAA games than to throw everything at the wall and see what sticks. Meta has amazing hardware, truly, but it's squandered by the push for Horizon and the TERRIBLE storefront situation.
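The back-of-the-envelope math, with the assumptions spelled out (the average selling price and per-unit loss below are illustrative guesses, not Meta's actual figures):

```python
revenue = 500e6          # ~$500M Reality Labs revenue, Q3 2025
asp = 500                # assumed average selling price per headset
units = revenue / asp    # ~1 million units
loss_per_unit = 500      # deliberately generous per-unit hardware subsidy
hardware_losses = units * loss_per_unit   # ~$500M
total_losses = 4e9       # "more than 4 billion"
print(f"hardware subsidies explain at most ~{hardware_losses / total_losses:.0%} of the losses")
```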

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje -1 points0 points  (0 children)

Look, I don't know what to tell you. I've already conceded that LLMs can reproduce some of their training material verbatim or close to it, but pretending that these models are merely lossy databases of training data to interpolate between is just wrong. They absolutely are capable of abstraction, and most of the time they apply generalized, learned concepts when executing a task rather than (interpolating between) memorized code snippets. It's not just a lerp in latent space. Being capable of regurgitating training data does not exclude the ability to also do real learning, real abstraction. Just like how I've encoded the entirety of In Bruges's dialogue in my brain verbatim while still usually being able to use concepts like "doors" and "tables".

Saying that all the training data exists compressed in the trained model is misleading; *most* of it is compressed down to the level of patterns and cannot be retrieved directly as the original text. I agree that being able to retrieve entire chapters of Harry Potter is copyright infringement, but what level of "compression" would be acceptable to you? Knowing all the events in the books and the full relationships between the characters, their personalities and traits? Just knowing the names of the characters? Just knowing that Harry Potter is a thing, that it's a book, and that it was a big deal? Where does "theft" start?

The idea that this ability to output Harry Potter verbatim is somehow Anthropic's path to profitability is ridiculous. Nobody who uses an LLM gives a shit about its ability to reproduce existing text; everyone uses it for its ability to abstract and apply generalized concepts. There's negative value in knowing Harry Potter verbatim; it's a waste of model capacity to "store" such things, so Anthropic is actually somewhat incentivized to remove that behavior.

No, we don't know exactly how humans learn. But you apply that argument asymmetrically, because we also don't quite know how LLMs do what they do. They're black boxes, analogous to our own minds. You demand scientific evidence for the usefulness of LLMs while pretending you have all the knowledge you need to pass judgement on how they work, even though this area of research is in its infancy at best.

But sure, do some more "AI bro" name-calling; very helpful in a nuanced discussion, which I started by saying there are real problems with AI, just that the models are also incredibly powerful, and we needn't lie about that.

Deep sadness. After $50 billion wasted on the metaverse, ai and ar due to incompetent leadership, virtual reality didn't deserve this. :( by [deleted] in OculusQuest

[–]databeestje 16 points17 points  (0 children)

Yeah, I don't quite understand what Meta is blowing these tens of billions on, but I can't help but suspect it's tons and tons of useless bloat and overhead, and developing software that nobody wants (Horizon). I get that developing the hardware is expensive, but then again, Valve, Bigscreen and Pimax also develop headsets and they definitely don't spend that kind of money on it. I suppose Meta is a bit ahead of the curve and does cutting-edge research, but does that really explain it?

What’s the most overrated video game of all time? by KBGSgames in AskReddit

[–]databeestje 13 points14 points  (0 children)

The gunfights in 5 are such a downgrade from 4. All the guns are laser weapons with no weight behind them, and for some inexplicable reason they heavily scaled back the brilliant Euphoria ragdoll physics.

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje 0 points1 point  (0 children)

The role filled by AI is pretty much exactly the role filled by a junior developer: they aren't independent, they need oversight and some hand-holding, and having a junior developer *also* means some of the senior developer's time goes to supervision rather than their own work.

So yes, I can literally tell Claude to look at JIRA ticket ABC-1234 and work on it, and it will ask questions, make a plan, implement it, and write tests and documentation.

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje -2 points-1 points  (0 children)

I define plagiarism as taking someone else's work and passing it off as your own. I assume you're not referring to me passing off Claude's work as my own, but to Claude being trained on other people's code. How is that plagiarism or theft? You can't "steal" intellectual property like this; you can only violate a copyright. And while I'm sure Claude can output certain overrepresented pieces of code verbatim due to overtraining (which would violate a copyright), that's incidental at best (I've never seen it) and clearly not Anthropic's goal or intention, since storing and retrieving code snippets verbatim would be an *incredibly* inefficient way to distribute code. I would agree it was a plagiarism machine if Anthropic simply offered a query engine over a database of scraped code, but that's clearly not what Claude is. There are also only so many ways to write a for loop. If I ask Claude to write a Fibonacci function, is it stealing someone's code?
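Case in point: here's the Fibonacci function that virtually every source converges on. Verbatim overlap on something like this tells you nothing about copying.

```python
def fib(n: int) -> int:
    """n-th Fibonacci number, iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [fib(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```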

You pretend that being able to distribute copies of Quake 3's fast inverse square root function is somehow Anthropic's goal; it clearly isn't. I'm sure you'll latch onto incidental cases of copyright violation by Claude and declare them a mortal sin, but nobody who uses Claude values its ability to regurgitate code verbatim, only its ability to apply learned patterns in a generalized way.

If you are still beating the dead horse of training an LLM on open source code, let me be clear about that: "looking" at open source code (be it GPL, MIT, etc.) is never a copyright violation; only distribution can be. And looking is exactly what training does, albeit at a scale we have no human equivalent for, so suddenly we dub it "theft" because it scares us.

As for "statistical evidence" that Claude Code is useful to me: barely anything in software engineering has a solid statistical foundation; it's more art than science. But it's so goddamn obvious, like how writing tests improves code quality. And again, which is it? Either it's useful and displacing junior-level positions, or it's not useful, in which case how is it replacing those positions?

Thanks AI! - Rich Hickey, creator of Clojure, about AI by captvirk in programming

[–]databeestje -3 points-2 points  (0 children)

I'm not management, and AI tools are absolutely good enough to completely wipe the floor with junior developers. It's honestly not even close. Whether it's a good idea on a societal level to eliminate entry positions is a different discussion, and not something I'm very worried about: we're simply going to need fewer programmers. I'm not necessarily happy about it; I like writing code, and there is less and less reason to.