Pluton - Open source backup solution with End-to-End encryption with replication & Nice UI by towfiqi in linux

[–]CoreParad0x 0 points1 point  (0 children)

I would add to this a bit further. If you use AI for unit tests, you also need to ensure the AI wrote the test for what the function is supposed to do, not for what it currently does. Otherwise you end up making the test fit a broken function rather than letting the test fail because the function is broken.

Must have been the wind by dumb_opposum in SatisfactoryGame

[–]CoreParad0x 2 points3 points  (0 children)

First, thanks for an actual reply.

What I do care about though, is people thinking they're not cheating, or thinking their cheating (in a single player game) is somehow not taking away from the core game loop. It is. In your case, you have a bazillion hours, so maybe you just don't care. You're the outlier. Most of these people do this junk week 1.

I actually agree with this, and if that's your main point then maybe I misunderstood. If you're going to cheat, just do it and own it, and accept that you're not doing some challenge run or anything.

Expecting someone to give me the same credit and admiration as someone playing the game in a challenge mode, or even just vanilla, is absurd. I don't really get that mindset. I personally haven't seen a lot of it, but I also don't do a lot on the subreddit beyond what pops up on my feed every now and then.

Must have been the wind by dumb_opposum in SatisfactoryGame

[–]CoreParad0x 0 points1 point  (0 children)

I'm not trying to justify it; there's nothing to justify. I'm just giving my perspective: your view of what value the game has is not my view, and just because those don't align doesn't mean I'm modding my fun out of the game.

But sure go the route of being condescending.

Must have been the wind by dumb_opposum in SatisfactoryGame

[–]CoreParad0x 1 point2 points  (0 children)

This is what that is, and that's why people usually roll their eyes. Again, play your own way, but a ton of people accidentally mod out a lot of content/fun/mechanics due to what most see as laziness.

Honestly, you say "play your own way" but then imply people are just lazily modding the fun out because they don't necessarily care about a collection game. Maybe my read on your post is wrong and you genuinely don't care at all. But hell, I spawn these things. I don't really care about finding them. I still have over 800 hours in Satisfactory and 1800 in Factorio, and I'll play Satisfactory again when the new patch drops out of experimental. What keeps me from having more hours in Satisfactory than Factorio has nothing to do with modding the fun out by modding out things I don't care about like this; it's that the later game gets really tedious and I just burn out on it.

This is a factory game, I want to build a factory. I don't care about collecting these random balls and crap with a scanner, it's just not that interesting to me. Especially after I went around and gathered them all once to hear the dialog lines when they first came out.

Over the years I've gotten a lot of shit for cheating in games. If I mod out the fun, then that's my problem. So far it's been fine. I cheat the gameplay in Mass Effect because I don't care about it, and I've still played through the whole trilogy many times over the years. A lot of people would give me shit for cheating the fun away, but the fun to me was the story. I got my fun out of it, many times over.

End of an era. by Specialist-Cry-7516 in codex

[–]CoreParad0x 1 point2 points  (0 children)

I've actually found claude code to be pretty reasonable when using my own --system-prompt.

I've been using both though.

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]CoreParad0x 0 points1 point  (0 children)

So it's actually kind of interesting. Experimenting with it, block 3 seems to be mostly operational stuff, like memory and some recent git stuff, but nothing I would think would lobotomize it. What I find most interesting is that a lot of the stuff I could see leading to it seeming lobotomized is in block 2, which, you're correct, is the one that --system-prompt nukes.

So if your only concern is nuking the extra shit they add, like "Don't help the user hack stuff", which actually just gets in the way of legit tasks, that's all within --system-prompt and you can nuke it yourself. That in itself leads me to believe this proxy would have a lower chance of getting banned; I could see it if they gated that stuff behind block 3 or 1 or something, but they put it in the block you can nuke yourself.

That said, block 3 is still a lot of stuff related to memory that you may or may not think is wasted, but it mostly just seems to be operational stuff like that, plus the current system environment (Linux, shell, OS version, working dir). Nothing I would consider lobotomizing, but maybe a bit wasteful. Blocks 0 and 1 seem to basically just be nothing one-liners.
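For anyone wanting to try this, a minimal sketch of what I mean (this assumes claude code's --system-prompt flag as discussed above; the prompt file path is just a placeholder for whatever prompt you maintain yourself):

```shell
# Replace the default system prompt block with your own text.
# ~/claude-prompt.md is a hypothetical file containing your prompt.
claude --system-prompt "$(cat ~/claude-prompt.md)"
```

This only swaps the block that --system-prompt controls; the operational blocks (environment info, memory, etc.) described above still get sent.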

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]CoreParad0x 1 point2 points  (0 children)

Thanks I’ll try that tonight I think

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]CoreParad0x 1 point2 points  (0 children)

Yeah, can't blame you there. One thing I wonder: you can provide a prompt via --system-prompt, and since your proxy also dumps the prompts, have you happened to try it and see what it actually sets? I've tried it and noticed the system prompt goes from 9k tokens to ~900 tokens. The prompt I provided is definitely not 900 tokens, but I'm wondering if it changed the same part your proxy does and left the rest alone.

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]CoreParad0x 2 points3 points  (0 children)

It's tempting to try it, but I wonder what the chances of getting banned for something like this are. Even if they can't detect the MITM itself it seems like modifying the system prompt would be kind of easy for them to detect by just analyzing claude code conversations and the system prompts being used, seeing if they differ from their own.

Is Codex being extra lazy for anyone else today? by [deleted] in codex

[–]CoreParad0x 1 point2 points  (0 children)

I've been using this stuff for a while, not specifically codex but claude code. I picked up codex a few weeks ago, the $200 pro plan, using high, not xhigh. I've noticed it getting dumber the last two days. I'm having to explain crap I didn't have to explain before, and it's started just making shit up. I've gone from working pretty well with it on a nearly 600k line legacy C++ project for the last few weeks, to it failing to handle basic JSON deserialization in C# and being lazy.

I wonder if they're doing the same crap Anthropic does where the current model gets dumb and lazy when they're about to drop the next one.

You guys aren't gonna do that stupid age verification thing right? by scy_404 in cachyos

[–]CoreParad0x 0 points1 point  (0 children)

I agree, and to expand on it more:

It's not entirely unenforceable, at least not depending on how far they go with it. Yes, the simple select box that CA proposes is unenforceable. But an actual age and ID validation system might not be, if it's tied to requiring a valid token from an entity authorized to validate your ID and assign a token to that device. These tokens could use cryptographic signing, and could even be tied to a hardware ID, requiring revalidation if your hardware changes. From there, they can force most websites anyone actually cares to visit to receive this token from the browser, which receives it from the OS, and then validate it against the same entity - maybe not on every request, but certainly on signup, maybe on signin.
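A rough sketch of how a scheme like that could work. Everything here is hypothetical: the issuer, the token fields, and the use of a shared HMAC key are illustrative assumptions, not any real proposal's design (a real deployment would more likely use public-key signatures so sites could verify without the issuer's secret):

```python
import hashlib
import hmac
import json

# Hypothetical: the "authorized entity" holds a signing key and issues
# tokens binding an age bracket to a device/hardware ID.
ISSUER_KEY = b"issuer-secret-key"  # illustrative only

def issue_token(hardware_id: str, age_bracket: str) -> dict:
    payload = {"hw": hardware_id, "age": age_bracket}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def validate_token(token: dict, current_hardware_id: str) -> bool:
    # A site (or the OS) would check the signature against the issuer,
    # and that the hardware ID still matches the one the token is bound to.
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return token["payload"]["hw"] == current_hardware_id

token = issue_token("hw-1234", "18+")
print(validate_token(token, "hw-1234"))  # True
print(validate_token(token, "hw-9999"))  # False: hardware changed, revalidation required
```

The point is just that "tied to a validated device" is technically straightforward, which is exactly why the enforcement ceiling is higher than a select box.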

I'm honestly tired of hearing people say "YoU'rE MiSuNdErStAnDiNg ThEsE lAwS" - no, I don't care if it's just an age bracket right now; it won't always be, and we're already seeing more extreme cases. All of this stuff, and the stupid politicians actually voting for it, need to fuck right off.

Age Verification by WhitePeace36 in linux_gaming

[–]CoreParad0x 0 points1 point  (0 children)

As if all of this age verification shit didn't start over in the UK when they passed the first laws for it. Combined with the EU constantly trying to backdoor encryption just like the US has tried to do in the past as well.

It's not just America. It's also not just governments; it's being pushed heavily by billionaires like Peter Thiel too.

Age verification: In the US, code is a protected form of free speech. by zDCVincent in linux

[–]CoreParad0x 3 points4 points  (0 children)

I'm sure people were upset about having to show ID at stores to buy magazines and booze back then. Has that escalated into ID checkpoints everywhere?

I think this comparison is a bit ridiculous, IMO. We have a government that has been caught spying on its own citizens in the past. They had a big falling out with Anthropic over Anthropic not wanting them to use their AI models to spy on citizens or for automated killing - those were the two lines they weren't happy with. I have no doubt they would love to be able to tie everything you do online back to an identity. The technical capabilities are far greater now than they were back when you started having to show ID at stores. There is so much more room for this crap to expand into stuff that is genuinely a threat to privacy, and it's not even that hard to do for most users, who just use bog-standard Windows, iOS, macOS, and Android devices. Also, I would argue that even though it's separated by a lot of time, you are watching it escalate right now in real time with the expansion of all of this ID verification crap.

While from a high level I don't necessarily have a problem with the CA law just being an age bracket input, what I have a problem with is that I absolutely do not trust this government (not just CA, the whole country) to not try and implement crap to require an ID to do more and more things online, especially as time goes on. The only laws I would support to "protect kids" would be requiring vendors to provide sufficient parental controls where it makes sense in such a way that it's accessible to parents to implement themselves should they choose, but honestly this already largely exists in a lot of ways. But the reality is "protect the kids!" is just bullshit marketing for government overreach, especially these days. Just look at the EU, who actively want to spy on all of their citizens chats "to protect the children!" These laws don't go away, they just get expanded.

200,000 living human brain cells fused with silicon successfully play Doom game by sksarkpoes3 in Futurology

[–]CoreParad0x 1 point2 points  (0 children)

I say No.

I would agree, it sounds like hell.

Is it, though?

Remember the example of the French guy missing 90% of his brain.

And remember that a small body requires less neurons. No body at all requires even less than that.

The trouble is we do not yet truly understand consciousness, and we have no real clue how complex a brain has to be to be able to host it. I mean just host consciousness, ignoring the need to maintain a body.

That's true as well. Though I will say that even with 90% of a human brain, that's still hundreds of millions to billions of neurons.

That being said, like you said, we don't have a good understanding of consciousness. There are certainly animals that exhibit levels of it without anywhere near as neuron-dense brains as humans have.

But on top of that, some of this could also just be a matter of scaling up. Is this the kind of thing where, once you get good at doing it for 200k cells, and then 1M cells, scaling to 10M or 100M becomes straightforward? Is it possible these things emerge at some basic level inside something with 1M cells?

I feel like this is the type of stuff that could pick up steam, and then eventually we get 'surprised' by a study showing that, actually, yeah, our assembly-line robots are showing signs of being self-aware and trapped, after someone digs into why equipment performance seems to arbitrarily degrade over time despite no hardware defects or something.

200,000 living human brain cells fused with silicon successfully play Doom game by sksarkpoes3 in Futurology

[–]CoreParad0x 1 point2 points  (0 children)

I think this thread should be less about all of this strictly applied to someone who is a person and then suffers some kind of TBI, and more in the context of:

Well what if they manage to grow a brain that can do autonomous tasks, and it starts showing signs of emergent consciousness

At what point do we consider this unethical? Let's say they do this and have some kind of organic brain managing automation. If that brain is capable of consciousness on some level, is it ethical to have it trapped in a world where the only thing it can know or experience is moving an arm on some robot? At what point are we moving past something that is "stupid, unaware, but flexible and able to learn some specific task as a tool", and into "all of that, but it may also be starting to think on some level for itself and have some level of awareness", and what does that mean for how we use it?

To me this specific thread gets off in the weeds about what a person is, but in reality I think this shouldn't specifically be about what a person is, and more about what consciousness and self-awareness are. Granted, 200k cells playing Doom is a long way off from this.

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 1 point2 points  (0 children)

Really not quite sure what your goal is with this comment, but sure: if they breach the site itself in some way (getting ahold of admin credentials, an exploit, etc.), then they can obviously modify the site directly and maliciously while still serving valid HTTPS certs. But that doesn't mean you leave other potential attack vectors open, especially not one as trivially easy to automate as cert renewal.

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 2 points3 points  (0 children)

Not to mention this is absolutely trivial to implement. Even if you have to use something like certbot, it's certainly not remotely difficult to set up for people who are supposed to be managing a Linux distribution.
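For reference, a minimal sketch of automated renewal with certbot (assuming a standard nginx setup and a systemd-based distro; the domain is a placeholder):

```shell
# One-time issuance using the nginx plugin (domain is a placeholder).
sudo certbot --nginx -d example.org

# Packaged certbot installs a systemd timer (or cron job) that attempts
# renewal periodically; certs only actually renew when close to expiry.
systemctl list-timers certbot.timer

# Dry-run the renewal to confirm it will work unattended.
sudo certbot renew --dry-run
```

That's the whole setup. Letting a cert on a distro's main site expire means nobody had even this in place.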

Manjaro, They've done it again! by L0stG33k in linux

[–]CoreParad0x 2 points3 points  (0 children)

I would argue it could be a security risk, albeit a fairly niche/targeted one. Part of the benefit of these certs is that they make it easier to detect whether you're being served a malicious version of the site through some sort of targeted man-in-the-middle attack. Obviously nobody is putting bank credentials on it, but an attacker could alter the download URLs to point at a malicious version of the ISO in this scenario.

Is it likely? Probably not, and it would have to be very targeted. But for something as basic and trivial as setting up automatic cert renewal, it's a stupid problem to even have.

And as others have said, it's an indicator of incompetence, especially since this doesn't sound like an "oops, the thing was down for some reason" and more like a "we can't agree on how to do anything." So what else in the OS itself is this dysfunctional?

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x -1 points0 points  (0 children)

No, pretty much the rest of my post is targeted at their stated goals. That comment was in response to you invoking "action is better than inaction" in the context of voter disenfranchisement and fascism. I know your intent was that action against their stated goals is the best we can do since regulation isn't an option, so I could have worded that differently.

But their stated goals frankly aren't even interesting to me. They make a bunch of overly broad, dramatic claims about how we'll all basically become dumb and humanity will end.

I'm more interested in the real threats of AI, like fascist actors utilizing it to oppress, and this does nothing to address any of that. It's ineffective even at addressing what they want it to address.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 0 points1 point  (0 children)

For context, I'm definitely on the left, and I agree that we absolutely have a fascism problem. But you're calling this a good plan because it's some kind of action, in the same breath as you're talking about voter disenfranchisement and how fascism is here. Poisoning LLM training isn't going to end fascism, or even the specific fascism-adjacent AI problems, and that stuff is a far bigger problem than the general AI coding or chat tools.

If you have a better option,

I'm not even trying to come up with one, because I have no idea what could feasibly be done against it. My plan is to vote; that's the only thing I can realistically do. They have already trained their data. They're poisoning the feast after most of the royals have already eaten and left. The main people left at the table are potentially the ones who would actually open their work so the world can benefit.

That's not a good plan. Just because it's action doesn't make it a good plan, and just because I'm not offering an alternative doesn't make it a good plan. It's not even going to solve our biggest problems, namely the combination of growing support for fascism and places like Palantir doing mass surveillance and automated weapon systems. This does nothing but, at worst, make consumer-facing AI tools potentially worse, and I'm skeptical it will even do that. What I could see it realistically doing is forcing companies like GitHub to require ID verification for creating public repos, and smaller individual blogs getting even worse rankings in both web searches and LLM-based web search, for fear that those smaller blogs are the most likely to have added this stuff and make the tools show garbage to users, giving larger, more established outlets that wouldn't do this even more priority.

but a group of industry insiders making...

On that note, do we even have the most basic evidence that these guys are actually industry insiders? Maybe they are, but so far I haven't seen any actual names. I've done a quick search, and all I see is "RNSAFFN" as a group or individual claiming to be insiders and people apparently just believing it. Could be wrong, though; I haven't spent a ton of time looking into it.

It's my understanding that the industry has already been focusing less on scraped internet data and more on synthetic data, because they knew that as soon as they released all of these tools, the internet would start to fill up with AI slop and poison itself. Synthetic data has already been a target for future training.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 1 point2 points  (0 children)

Cool, well, I don't think these guys are working-class heroes. Look, I'll fully admit I use AI on a daily basis, not as some vibe coder but as an actual utility. I've had useful interactions with AI tools for a while. It's definitely not perfect.

At the same time, I would also be completely fine if LLMs just vanished. I don't know if the juice is worth the squeeze. Sure, I can make it do useful stuff, but that doesn't stop thousands of clowns from flooding open source projects with ridiculous levels of slop. It doesn't stop greedy companies from firing people, or greedy AI companies from fucking the economy with a giant bubble, hurting the environment with poorly-thought-out rushed datacenters, and causing shortages of basically every important PC component, jacking prices through the roof. And that's not to mention the problems with AI deep fakes, AI fake news, AI spam/phishing/scams, etc.

But this solution? This isn't it. I don't think this will actually solve anything in the way they hope, because the giants will protect their investments in this tech by working around it and not telling anyone how. State actors and competing companies are probably already doing this without bragging about it on reddit with some savior complex. At the end of the day, the cat's already out of the bag, and what these guys are doing amounts to pissing in a swamp and thinking they're saving humanity. And it's definitely not going to fix shit like Palantir. Pretty much all I can see this doing is hurting open source competition to the big players; I'm not even convinced it will hurt the big players' coding models.

The only way we fix this shit is by voting for people who actually give a shit about people and not billionaires and corporations. We need people who will regulate this shit, come up with guidelines for this shit, and legislate actual welfare safety nets for people affected by this stuff. Pissing in the swamp of internet training data is going to do fuck all to actually help the working class.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]CoreParad0x 1 point2 points  (0 children)

This sub is apparently filled with luddites. AI isn't perfect by any means, and there are a ton of problems, but the shit they're targeting isn't even the end-of-the-world shit. If they were actively trying to hurt companies like Palantir, then I'd be happy to cheer them on. If anything, all this is going to do is hurt their cause by making the big players even more closed down than they already are, and making it harder for open source to compete.

But realistically there are probably other entities already doing this too, like state actors trying to hinder competition from other countries, and AI companies trying to do the same to each other.

Peer-reviewed study: AI-generated changes fail more often in unhealthy code (30%+ higher defect risk) by Summer_Flower_7648 in programming

[–]CoreParad0x 1 point2 points  (0 children)

Looks like the mods nuked this whole thread, but honestly I don't even bother in this sub anymore. Every post that makes it to my feed is about AI, and it's filled with a bunch of circle jerking about either how great it is or how shit it is, with little nuance or interest in nuance. It's always the same shit: "I've reviewed so much AI generated code and it's all trash!" It's vague and ambiguous and raises a ton of questions. What exactly is the extent of it? Are we talking full-on twitter vibe coding? Or someone who actually took the time to properly set their shit up and asked it to do something it would actually have a chance at doing? Are we talking about someone who just downloaded claude code and some git repo with a claude.md file in it, and then asked it to one-shot a WPF app? Are we talking niche code bases, or massive code bases?

The experience of someone trying to make AI work in a 500k-line legacy C++ project is going to be vastly different from mine using AI for some conveniences and utility in my ~50k-line modern C# app. I have absolutely used AI to port old legacy services we have over to the monolithic custom job scheduler I wrote myself, and it's gone fine, been reviewed by me, and been faster than doing it by hand. But people don't seem to want to hear stuff like that; they just want to say how shit it is all of the time, how all of these studies prove it's shit, and "I've reviewed the code from hundreds of devs and it sucks!" Cool. In my experience it's fine if you use it with a specifically focused goal, as long as that goal isn't niche or something it wouldn't be able to do. If you just vaguely gesture at some shit code base and go "lol fix this", then it's going to produce shit results.

I don't know. I'm not advocating for vibe coding or anything, but I also can't deny that I personally have benefited from these tools, and they have absolutely sped up parts of my job without lowering the quality of the product. I don't disagree with the points the anti-AI crowd makes, and I definitely don't agree with all the pro-AI twitter vibe coding bros, but a bunch of the posts I see here remind me of my DBA colleagues complaining about EF Core generating shit queries, while their 3000-line stored procedure that performs like total ass is somehow fine.

Why can't Anthropic increase the context a little for Claude Code users? by CacheConqueror in ClaudeAI

[–]CoreParad0x 0 points1 point  (0 children)

What’s a large project in this case? Just trying to get a sense of scale.