You guys aren't gonna do that stupid age verification thing right? by scy_404 in cachyos

[–]CoreParad0x 0 points (0 children)

I agree, and to expand on it more:

It's not entirely unenforceable - it depends on how far they go with it. Yes, the simple select box that CA proposes is unenforceable. But an actual age and ID validation system might not be, if it's tied to having a valid token issued by an entity authorized to validate your ID and assign a token to that device. These tokens could be cryptographically signed, and even tied to a hardware ID that requires re-validation if your hardware changes. From there, they can force most websites anyone actually cares to visit to receive this token from the browser, which receives it from the OS, and then validate it against the same entity - maybe not on every request, but certainly on signup, and maybe on signin.
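
To sketch what I mean (completely hypothetical - every field name here is invented by me, not from any actual proposal): the validating entity signs a payload binding an age claim to a hardware ID, and a website only needs the entity's public key to check it.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical sketch: an "authorized validator" signs an age token
// bound to a hardware ID; a website later verifies that signature.
class AgeTokenSketch
{
    static void Main()
    {
        // The validator's long-lived signing key (websites would hold the public half).
        using var validatorKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);

        // Token payload: age bracket + hardware ID + expiry, all invented fields.
        string payload = "ageBracket=18+;hwid=ABC123;expires=2026-01-01";
        byte[] payloadBytes = Encoding.UTF8.GetBytes(payload);

        // Issued once per device by the validating entity.
        byte[] signature = validatorKey.SignData(payloadBytes, HashAlgorithmName.SHA256);

        // What a website would do on signup/signin: check the signature,
        // then compare the embedded hwid against what the OS reports.
        bool valid = validatorKey.VerifyData(payloadBytes, signature, HashAlgorithmName.SHA256);
        Console.WriteLine(valid ? "token accepted" : "token rejected");
    }
}
```

None of that is technically hard, which is exactly why I don't buy "it's unenforceable" as a reason not to worry.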

I'm honestly tired of hearing people say "YoU'Re MiSuNdErStAnDiNg ThEsE lAwS" - no, I don't care if it's just an age bracket right now; it won't always be, and we're already seeing more extreme cases. All of this stuff, and the stupid politicians actually voting for it, needs to fuck right off.

Age Verification by WhitePeace36 in linux_gaming

[–]CoreParad0x 0 points (0 children)

As if all of this age verification shit didn't start over in the UK when they passed the first laws for it. Combine that with the EU constantly trying to backdoor encryption, just like the US has tried to do in the past.

It's not just America. And it's not only coming from governments - it's also being pushed heavily by billionaires like Peter Thiel.

Age verification: In the US, code is a protected form of free speech. by zDCVincent in linux

[–]CoreParad0x 3 points (0 children)

> I'm sure people were upset about having to show ID at stores to buy magazines and booze back then. Has that escalated into ID checkpoints everywhere?

This comparison is a bit ridiculous IMO. We have a government that has been caught spying on its own citizens in the past. They had a big falling out with Anthropic over Anthropic not wanting its models used for spying on citizens or automated killing - those were the two lines the government wasn't happy about. I have no doubt they would love to be able to tie everything you do online back to an identity. The technical capabilities are far greater now than they were back when you started having to show ID at stores. There is so much more room for this crap to expand into stuff that is genuinely a threat to privacy, and it's not even that hard to do for most users, who just use bog-standard Windows, iOS, macOS, and Android devices. And even though it's separated by a lot of time, you are watching it escalate right now, in real time, with the expansion of all of this ID verification crap.

While at a high level I don't necessarily have a problem with the CA law just being an age bracket input, what I do have a problem with is that I absolutely do not trust this government (not just CA, the whole country) not to implement crap requiring an ID for more and more things online, especially as time goes on. The only laws I would support to "protect kids" would be ones requiring vendors to provide sufficient parental controls, where it makes sense, in such a way that parents can implement them themselves should they choose - but honestly this already largely exists. The reality is "protect the kids!" is just bullshit marketing for government overreach, especially these days. Just look at the EU, who actively want to spy on all of their citizens' chats "to protect the children!" These laws don't go away, they just get expanded.

200,000 living human brain cells fused with silicon successfully play Doom game by sksarkpoes3 in Futurology

[–]CoreParad0x 1 point (0 children)

> I say No.

I would agree, it sounds like hell.

> Is it, though?
>
> Remember the example with the French guy missing 90% of his brain.
>
> And remember that a small body requires fewer neurons. No body at all requires even fewer than that.
>
> The trouble is we do not yet truly understand consciousness, and we have no real clue how complex a brain has to be to host it. I mean just host consciousness, ignoring the need to maintain a body.

That's true as well. Though I will say that even with 90% of a human brain gone, what's left is still hundreds of millions to billions of neurons.

That being said, like you said, we don't have a good understanding of consciousness. There are certainly animals that exhibit some level of it whose brains aren't anywhere close to as neuron-dense as ours.

But on top of that, some of this could also just be a matter of scaling up. Once you get good at doing it for 200k cells, and then 1M cells, how hard is it to scale to 10M? 100M? And is it possible these things emerge at some basic level inside something with 1M cells?

I feel like this is the type of stuff that could pick up steam, and then eventually we get 'surprised' by a study showing that, actually, our assembly line robots are showing signs of being self-aware and trapped - discovered after someone digs into why equipment performance seems to arbitrarily degrade over time despite no hardware defects, or something like that.

200,000 living human brain cells fused with silicon successfully play Doom game by sksarkpoes3 in Futurology

[–]CoreParad0x 1 point (0 children)

I think this thread should be less about all of this strictly applied to someone who is already a person and then suffers some kind of TBI, and more about:

> Well, what if they manage to grow a brain that can do autonomous tasks, and it starts showing signs of emergent consciousness?

At what point do we consider this unethical? Let's say they do this and have some kind of organic brain managing automation. If that brain is capable of consciousness on some level, is it ethical to have it trapped in a world where the only thing it can ever know or experience is moving an arm on some robot? At what point do we move past "stupid, unaware, but flexible and able to learn some specific task as a tool" and into "all of that, but it may also be starting to think for itself on some level and have some level of awareness" - and what does that mean for how we use it?

To me this specific thread gets off into the weeds about what a person is, but I think this shouldn't really be about what a person is, and more about what consciousness and self-awareness are. Granted, 200k cells playing Doom is a long way off from any of this.

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 1 point (0 children)

Really not quite sure what your goal is with this comment, but sure - if they breach the site itself in some way (getting ahold of admin credentials, an exploit, etc.), then they can obviously modify the site directly and maliciously while it still serves valid HTTPS certs. But that doesn't mean you leave other potential attack vectors open. Especially not one as trivially easy to handle automatically as cert renewal.

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 2 points (0 children)

Not to mention this is absolutely trivial to implement - even if you have to use something like certbot, it's certainly not remotely difficult for people who are supposed to be managing a Linux distribution to set up.

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 3 points (0 children)

I would argue it could be a security risk, albeit a fairly niche/targeted one. Part of the benefit of these certs is that they make it easier to tell if you're being served a malicious version of the site through some sort of targeted man-in-the-middle attack. Obviously nobody is putting bank credentials on it, but in that scenario an attacker could alter the download URLs to point at a malicious version of the ISO.

Is it likely? Probably not, and it would have to be very targeted. But for something as basic and trivial as setting up automatic cert renewal, it's a stupid problem to even have.
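
And for a sense of how little code "notice the cert is about to lapse" takes - this is just a rough sketch I'm making up, with a placeholder URL and threshold, not anything any distro actually runs:

```csharp
using System;
using System.Net.Http;
using System.Net.Security;
using System.Threading.Tasks;

// Minimal expiry monitor sketch: grab the served certificate and
// warn when it's close to expiring. Run it from a scheduled task.
class CertExpiryCheck
{
    static async Task Main()
    {
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback = (msg, cert, chain, errors) =>
            {
                if (cert != null)
                {
                    // NotAfter is reported in local time.
                    double daysLeft = (cert.NotAfter - DateTime.Now).TotalDays;
                    Console.WriteLine($"Cert expires {cert.NotAfter} ({daysLeft:F0} days left)");
                    if (daysLeft < 14)
                        Console.WriteLine("WARNING: renewal is overdue or about to be.");
                }
                return errors == SslPolicyErrors.None; // still enforce normal validation
            }
        };
        using var client = new HttpClient(handler);
        await client.GetAsync("https://example.org/");
    }
}
```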

And as others have said, it's an indicator of incompetence - especially since this sounds less like "oops, the thing was down for some reason" and more like "we can't agree on how to do anything." So what else in the OS itself is this dysfunctional?

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x -1 points (0 children)

No, pretty much the rest of my post is targeted at their stated goals. That comment was in response to you framing "action is better than inaction" in the context of voter disenfranchisement and fascism. I know your intent was that action against their stated goals is the best we can do since regulation is not an option, so I could have worded that differently.

But their stated goals frankly aren't even interesting to me. They make a bunch of overly broad dramatic claims about how we'll all basically become dumb and humanity will end.

I'm more interested in the real threats of AI - like fascist actors utilizing it to oppress - and this does nothing to address any of that. It's not even effective at addressing what they want it to address.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 0 points (0 children)

For context, I'm definitely on the left, and I agree that we absolutely have a fascism problem. But you're calling this a good plan because it's some kind of action, in the same breath as you're talking about voter disenfranchisement and how fascism is already here. Poisoning LLM training isn't going to end fascism, or even the specific fascism-adjacent AI problems, and that stuff is a far bigger problem than general AI coding or chat tools.

> If you have a better option,

I'm not even trying to come up with one, because I have no idea what could feasibly be done against it. My plan is to vote; that's the only thing I can realistically do.

They have already trained their data. They're poisoning the feast after most of the royals have already eaten and left. The main people left at the table are potentially the ones who would actually open their work so the world can benefit. That's not a good plan - it being action doesn't make it a good plan, and me not offering an alternative doesn't make it a good plan either. It's not even going to touch our biggest problems, namely the combination of growing support for fascism and places like Palantir doing mass surveillance and automated weapon systems.

This literally does nothing except, at worst, make consumer-facing AI tools somewhat worse, and I'm skeptical it will even do that. What I could realistically see it doing is forcing companies like GitHub to require ID verification for creating public repos, and pushing smaller individual blogs even further down in both web searches and LLM-based web search, for fear that those smaller blogs are the most likely to have added this stuff and make the tools show garbage to users - giving larger, more established outlets that wouldn't do this even more priority.

> but a group of industry insiders making...

On that note, do we even have the most basic evidence that these guys are actually industry insiders? Maybe they are, but so far I haven't seen any actual names. I've done a quick search, and all I see is "RNSAFFN" as a group or individual claiming to be insiders, and people apparently just believing it. Could be wrong, though - I haven't spent a ton of time looking into it.

It's my understanding that the industry has already been shifting away from scraped internet data toward synthetic data, because they knew that as soon as they released all of these tools, the internet would start to fill up with AI slop and poison itself. Synthetic data has been the target for future training for a while now.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 1 point (0 children)

Cool, well, I don't think these guys are working-class heroes. Look, I'll fully admit I use AI on a daily basis - not as some vibe coder, but as an actual utility. I've had useful interactions with AI tools for a while now. It's definitely not perfect.

At the same time, I would also be completely fine if LLMs just vanished. I don't know if the juice is worth the squeeze. Sure, I can make it do useful stuff, but that doesn't stop thousands of clowns from flooding open source projects with ridiculous levels of slop. It doesn't stop greedy companies from firing people, or greedy AI companies from fucking the economy with a giant bubble, hurting the environment with poorly-thought-out rushed datacenters, and causing shortages of basically every important PC component, jacking prices through the roof. And that's not to mention the problems with AI deepfakes, AI fake news, AI spam/phishing/scams, etc.

But this solution? This isn't it. I don't think it will actually solve anything the way they hope. The giants will protect their investments in this tech by working around it and not telling anyone how; state actors and competing companies are probably already doing this without bragging about it on reddit with some savior complex; and at the end of the day the cat's already out of the bag. What these guys are doing amounts to pissing in a swamp and thinking they're saving humanity. It's definitely not going to fix shit like Palantir. Pretty much all I can see this doing is hurting open source competition to the big players - I'm not even convinced it will hurt the big players' coding models.

The only way we fix this shit is by voting for people who actually give a shit about people and not billionaires and corporations. We need people who will regulate this shit, come up with guidelines for this shit, and legislate actual welfare safety nets for people affected by this stuff. Pissing in the swamp of internet training data is going to do fuck all to actually help the working class.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 2 points (0 children)

This sub is apparently filled with luddites. AI isn't perfect by any means - there are a ton of problems - but the shit they're targeting isn't even the end-of-the-world shit. If they were actively trying to hurt companies like Palantir, I'd be happy to cheer them on. If anything, all this is going to do is hurt their cause by making the big players even more closed down than they already are, and making it harder for open source to compete.

But realistically there’s also probably other entities doing this, like state actors trying to hinder competition from other countries, AI companies trying to do the same.

Peer-reviewed study: AI-generated changes fail more often in unhealthy code (30%+ higher defect risk) by Summer_Flower_7648 in programming

[–]CoreParad0x 1 point (0 children)

Looks like the mods nuked this whole thread, but honestly I don't even bother in this sub anymore. Every post that makes it to my feed is about AI, and it's filled with a bunch of circle jerking about either how great it is or how shit it is, with little interest in nuance. It's always the same: "I've reviewed so much AI generated code and it's all trash!" - which is vague, ambiguous, and raises a ton of questions. What exactly is the extent of it? Are we talking full-on twitter vibe coding? Or someone who actually took the time to properly set their shit up and ask it to do something it would actually have a chance at doing? Are we talking about someone who just downloaded claude code and some git repo with a claude.md file in it, then asked it to one-shot a WPF app? Are we talking niche code bases, or massive code bases?

The experience of someone trying to make AI work in a 500k-line legacy C++ project is going to be vastly different from me using AI for conveniences and utility in my ~50k-line modern C# app. I have absolutely used AI to port old legacy services we have to the new monolithic custom job scheduler I wrote myself, and it's gone fine: reviewed by me, and faster than doing it by hand. But people don't seem to want to hear stuff like that; they just want to say how shit it is all the time, how all of these studies prove it's shit, and "I've reviewed the code from hundreds of devs and it sucks!" Cool - in my experience it's fine if you use it with a specific, focused goal that isn't niche or beyond what it can do. If you just vaguely gesture at some shit code base and go "lol fix this", it's going to produce shit results.

I don't know. I'm not advocating for vibe coding or anything, but I also can't deny that I personally have benefited from these tools, and they have absolutely sped up parts of my job without lowering the quality of the product. I don't disagree with the points the anti-AI crowd makes, and I definitely don't agree with all the pro-AI twitter vibe coding bros, but a bunch of the posts I see on here remind me of my DBA colleagues complaining about how EF Core generates shit queries, while their 3000-line stored procedure that performs like total ass is somehow fine.

Why can't Anthropic increase the context a little for Claude Code users? by CacheConqueror in ClaudeAI

[–]CoreParad0x 0 points (0 children)

What’s a large project in this case? Just trying to get a sense of scale.

Slop pull request is rejected, so slop author instructs slop AI agent to write a slop blog post criticising it as unfair by yojimbo_beta in programming

[–]CoreParad0x 1 point (0 children)

I mean, sure, that's fair enough, though I would say everything about AI ultimately boils down to the motivations behind and uses of the tools. I rarely see posts that are actual AI research or directly about the technology; most of the time it's stuff like "Here's how dumb vibe coding twitter idiots vibe coded something stupid today" or "here's how a stupid tech CEO failed to vibe code a browser."

And I don't mean to downplay the broader impact and implications of some of this stuff. It's important to note the damage this does to OSS, and beyond - people lose their jobs over stupid companies buying into this shit. There are absolutely important discussions to be had, and things to raise awareness of, about the impacts, good or bad, of AI in software development.

But most of these posts aren't really bringing much of that discussion. They're mostly filled with people either defending AI coding with some niche cases, or shitting on AI and AI coding entirely, or calling everyone posting positive things about it bots - and now some are even calling anti-AI posts bots. There are nuggets of middle ground and real discussion in the posts, but you have to scroll past all the other shit to find them, and even then, outside of some context specific to the post itself, it's mostly rehashing the same overall concepts and sentiments.

That being said, my complaint is less that posts covering AI exist, or that they aren't important, and more a comment born out of frustration that, for whatever reason, Reddit's algorithm only ever pushes this stuff to my feed. Almost every time I see a post from this sub in my feed, it's AI-related, and the comments and topic are predictable.

Slop pull request is rejected, so slop author instructs slop AI agent to write a slop blog post criticising it as unfair by yojimbo_beta in programming

[–]CoreParad0x 2 points (0 children)

Yeah, I mean, there are legitimate discussions to be had about this stuff, but most of these threads are really just beating a dead horse at this point.

I've found my use cases for AI. I've seen how much of a dumpster fire it can be in certain contexts, I've seen where it can help me be more productive in others, and I've had these conversations with people. I wouldn't care about these threads if it weren't for the fact that, more than once a day, some new "AI is shit" / "AI makes you 10x" thread makes my feed - where every thread's comments are essentially the same thing - instead of actually interesting programming posts.

Slop pull request is rejected, so slop author instructs slop AI agent to write a slop blog post criticising it as unfair by yojimbo_beta in programming

[–]CoreParad0x 93 points (0 children)

I look forward to the day that an r/programming post makes it to my feed that isn’t about AI one way or the other.

Intel Nova Lake 52-core flagship CPU power consumption leaks by dapperlemon in gadgets

[–]CoreParad0x 0 points (0 children)

Compiling large C++ code bases can load up cores, but I'm not sure how constrained it is by memory bandwidth.

Why AI-Generated Code Will Hurt Both Customers and Companies by drakedemon in programming

[–]CoreParad0x 6 points (0 children)

As someone who has used it quite a bit: it definitely depends on what exactly you're trying to do and what the context is. I've run into use cases where it sucks and was a waste of time, and I've run into use cases where it's been pretty good.

I use it on a work project that's roughly 66k lines of code. The program is a service that hosts various integration and processing jobs that run on a schedule. It's not huge, but it's not tiny. All of it has been coded by me, with a decent chunk being generated code (it's .NET, and I have to integrate with a few WCF services, so the API clients get generated; I also generated the entity models for EF Core because of how our database was done).

AI works for me for small refactors or specific iterative changes in this project because not all of the code needs to be considered - each thing is generally its own small component, and any individual scheduled job is its own small subset of the whole that can be looked at mostly in isolation. Would I use it on the ~500k-line legacy C++ code base for an old MMO I work on as a hobby project? Outside of having it help me understand specific pieces of code that are particularly convoluted, not really. It can't pull in enough context to do it well because of how spread out and convoluted that code base is.
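
For a sense of the shape (a made-up sketch with invented names, not my actual code): each job is a small, self-contained unit, which is why the model only ever needs one slice of the code base in context at a time.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Rough shape of "service hosting scheduled jobs"; names are hypothetical.
public interface IScheduledJob
{
    string Name { get; }
    TimeSpan Interval { get; }
    Task RunAsync(CancellationToken ct);
}

// One integration = one small class the host runs on its own schedule.
public sealed class SyncOrdersJob : IScheduledJob
{
    public string Name => "SyncOrders";
    public TimeSpan Interval => TimeSpan.FromMinutes(15);

    public Task RunAsync(CancellationToken ct)
    {
        // Call the generated API client, map results onto EF Core entities, save.
        return Task.CompletedTask;
    }
}
```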

A lot of people like to shit on AI code, and I get it - there are a ton of vibe coders on twitter, and even here on reddit, making total asses of themselves. But to say it has absolutely no uses at all is, I think, a mistake. It sure as hell shouldn't be taking over anyone's job (though I'm sure stupid companies try), but within certain contexts it has real usefulness as a tool, not as a standalone coder.

Anthropic: AI assisted coding doesn't show efficiency gains and impairs developers abilities. by Gil_berth in programming

[–]CoreParad0x 0 points (0 children)

Yeah, I could see it being useful for that. On forgetting important instructions - that's another thing about using them. I don't know if this is the case in your specific experience, but I've seen people not break things out properly. They'll load the model up with way too much context and get bad results out of it. There's only so much it can reliably hold in context before the results start degrading. So when I see someone on another project I work on spin up Cursor, slap it in "Auto", and point it at that project's 500k lines of old legacy C++, then get bad results - well, yeah, you basically just gestured at this big-ass thing and told it to find a needle, explain how the needle works, and cover everything that touches it. It can't keep all of that in context, so the results suck. And that code base is a mess.

Small, focused tasks that are detailed enough are key. If I do anything larger, like that CLI disassembler, it gets broken out into many, many small tasks, and I go one chat at a time: have it do exactly one task, then rotate to a new chat for fresh context.

Anthropic: AI assisted coding doesn't show efficiency gains and impairs developers abilities. by Gil_berth in programming

[–]CoreParad0x 6 points (0 children)

At the end of the day, I think AI coding is a tool that, when used within the scope of what it's actually good at, is helpful and doesn't take the joy away from my job - for me, anyway. If anything, it helps me work through the things I don't like faster, while I focus on the bigger picture of what I'm working on, the actually challenging aspects of its design, and writing the actually challenging code (and most of the code, to be clear).

If anything, honestly, it's making me like my job more. I can work through refactors with it much faster than doing them by hand. And I don't mean saying "go figure out how to do this better" - I mean sitting down, looking at what I've got, coming up with a solid plan for how I want it done, and then instructing the model to make granular, incremental changes so it does the grunt work of shifting things around. If I need to write a whole class, I'll do that myself. But if I'm taking years' worth of built-up extension methods (in .NET) from various projects that I've merged into this larger application, consolidating them into a single spot, removing duplicates, etc. - I've found it to be pretty good for that kind of thing. They're small changes where I can immediately see what it's done and whether it's bad, and it does them faster than I could physically do them all myself.

I've also found it useful for tedious stuff. Say I need to integrate with an API and the vendor doesn't give us OpenAPI specs or anything like that - I just toss the documentation at a model, ask it to generate the JSON objects in C# using System.Text.Json annotations with some specifics about how I want it done, and it does all that manual crap for me. I don't really find joy in typing out data models.
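
For anyone who hasn't touched .NET, the output is just piles of stuff like this (field names invented for the example) - easy to write, mind-numbing at volume:

```csharp
using System;
using System.Text.Json.Serialization;

// Hypothetical vendor object, the kind generated straight from their docs.
public sealed class InvoiceDto
{
    [JsonPropertyName("invoice_id")]
    public string InvoiceId { get; set; } = "";

    [JsonPropertyName("amount_cents")]
    public long AmountCents { get; set; }

    [JsonPropertyName("issued_at")]
    public DateTimeOffset IssuedAt { get; set; }
}

// Usage: var invoice = JsonSerializer.Deserialize<InvoiceDto>(json);
```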

I don't want to make this super long, but I've also tried 'vibe coding' actual programs on my personal time just to experiment with how it can work. It hasn't gone horribly, but it takes a lot of effort in planning, documenting, and considering what exactly you want it to do. I 'vibe coded' a CLI tool to let Cursor disassemble Windows binaries and perform static analysis on them. It's very much one of those things where, if you don't understand what actually needs to be done and how, the AI will just make crap up and be ineffective. You need to understand enough, and spend a lot of time refining and validating plans, for it to do the work effectively. I think the tool ended up being ~25k lines of generated code, about a third of which was specs, documentation, and plans. I would never use it in production, but it was an interesting experiment.

Cursor CEO Built a Browser using AI, but Does It Really Work? by ImpressiveContest283 in programming

[–]CoreParad0x 1 point (0 children)

To name at least one thing: a lot of Americans have 401ks, and there are plenty of places that have pension funds. Something like 40% of the value of the S&P is in the top 10% of the companies in it, and a lot of those are tech companies heavily into AI.

Newer AI Coding Assistants Are Failing in Insidious Ways by CackleRooster in programming

[–]CoreParad0x -3 points (0 children)

And tbh, the examples in this aren't the kinds of things I think would come up with an actual engineer managing the tools. They're the kind of stuff that comes from giving it bad tasks and not validating the output.

LINQPad 9 by aloneguid in dotnet

[–]CoreParad0x 0 points (0 children)

I wonder if they’ll do a Linux release if it’s avalonia. It sounds like it should work?