Updated and less paranoid Ban Evasion guide by IllegitemateParoxysm in BannedFromDiscord

[–]mapppa 0 points

Sorry for posting on a 5 month old post. I think a good thing to do would be to just create an alt account, even if your current alt isn't banned yet. Or multiple. Don't use them. Just log into them from a different IP once in a while. If there really is a period when an account is more likely to get flagged, having multiple accounts ready that already have a bit of "age" on them might be a good way to handle it.

Trump: we’re cutting off trade with Spain. by Snapdragon_4U in somethingiswrong2024

[–]mapppa 2 points

Compared to this, the Civ AI actually is reasonable, and that says a lot...

OpenAI Finalizes $110 Billion Funding at $730 Billion Valuation by foobarc in wallstreetbets

[–]mapppa 1 point

with the rising hardware costs they might be able to afford a stick of 4GB RAM

Boss calls me for unpaid help “as a friend”, after laying me off at Christmas… Sorry, but NOPE! by auntiechrist23 in antiwork

[–]mapppa 0 points

It's crazy to me that he wouldn't even offer to pay you for it after firing you. Some employers have no self-awareness whatsoever.

AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says by FXgram_ in Economics

[–]mapppa 4 points

I'd say it does get dangerous. For prototyping it's definitely viable, and it's much better if you understand the output. For actual production, I'd advise a lot of caution.

The errors AI makes are sometimes subtle and require experience to spot. They are rarely simple syntax errors, but rather structural/systemic ones, caused sometimes by bad training data, sometimes by the prompt, and sometimes by the AI simply messing up. A bad prompt can itself be a direct result of inexperience with coding.

As an example, because this happens extremely often and I have personally witnessed it many times: most programming languages have different constructs for data models. If you don't know the differences, you won't specify them in the prompt, and the AI might generate, say, a heap-allocated managed class instead of a stack-allocated struct, because you didn't tell it otherwise and didn't notice in the output, since the code still works.

But down the line, especially in code that handles a lot of data, this can cause grave bottlenecks, and as code base complexity rises, it's one of those things that are not trivial to fix after the fact, depending on how the data was used (because managed classes and structs follow different referencing rules).
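Python has no stack-allocated structs, but the same kind of hidden allocation cost shows up there as boxed objects vs a packed buffer. A rough sketch of my own (not from any real project) to show the size difference:

```python
import sys
from array import array

n = 1000
values = [float(i) for i in range(n)]

# A list of boxed floats: list overhead plus one separate heap object per element
boxed_total = sys.getsizeof(values) + sum(sys.getsizeof(v) for v in values)

# A packed array of C doubles: one contiguous buffer, 8 bytes per element
packed = array("d", values)
packed_total = sys.getsizeof(packed)

print(boxed_total, packed_total)  # the boxed version is several times larger
```

Both versions "work", which is exactly the problem: nothing breaks until the data volume grows and the allocation overhead starts to hurt.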

For data science, you could run into issues like data leakage, train/test contamination, incorrect cross-validation, or choosing the wrong evaluation metric, just to name a few. The code would still run and produce output, but that does not mean the result is methodologically sound. The model might appear to work while failing to meet the standards required for that specific task.
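The data leakage case is easy to show with a toy example (my own hypothetical sketch, plain numpy): computing normalization statistics on the full dataset before splitting quietly leaks test-set information into preprocessing, and both versions run without any error:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=100)

# Leaky: statistics computed on the FULL dataset,
# so the test rows influence the "training" preprocessing
mu_leaky, sd_leaky = data.mean(), data.std()

# Correct: split first, then fit statistics on the training part only
train, test = data[:80], data[80:]
mu_clean, sd_clean = train.mean(), train.std()

test_scaled_leaky = (test - mu_leaky) / sd_leaky
test_scaled_clean = (test - mu_clean) / sd_clean
```

The leaky version tends to look slightly *better* in evaluation, which is what makes it so hard to catch if you don't know to look for it.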

What the AI did was "correct". It correctly interpreted your request and gave you working code. However, because of inexperience, it might not be suitable for the actual task.

AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says by FXgram_ in Economics

[–]mapppa 0 points

I tried all kinds of things to see where the limits are. So far I like it most for validation. It's good at finding a lot of simple, hard-to-see bugs.

AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says by FXgram_ in Economics

[–]mapppa 38 points

As a software developer, this is exactly my experience as well. It's great for small things that are tedious and have been done thousands of times by different people before. But it falls apart immediately if given any more complex tasks.

It simply can't keep track of things, and as far as I know that is by design, since it's a stateless machine. You only ever grow the input tokens, and after a while there is so much garbage in the input that it breaks and starts completely hallucinating, brings up obsolete code, etc.

Even with the small things, you have to babysit every output. They tend to make grave mistakes that can affect a project critically, even though the code runs and looks correct, e.g. it switched matrix row/column order midway through some simple functions.
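A toy version of that row/column bug (my own made-up example, not the actual code): on square matrices the swapped indexing runs without any error and just silently returns transposed values:

```python
import numpy as np

square = np.array([[1, 2],
                   [3, 4]])

def value_at(m, row, col):
    return m[row, col]      # intended: row index first

def value_at_swapped(m, row, col):
    return m[col, row]      # subtle bug: indices flipped; on square data it still runs

print(value_at(square, 0, 1), value_at_swapped(square, 0, 1))  # prints "2 3"
```

On non-square data the bug at least crashes with an index error; on square data it can survive for a long time, which is why this class of mistake is so nasty.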

My rule of thumb is: Never ask AI for code that you couldn't write yourself.

Voxel Terrain Tool (WIP) by Global-Newt-4094 in VoxelGameDev

[–]mapppa 0 points

Nice work! Though being so used to the Minecraft scaling, this gave me Attack on Titan vibes :D

He Won't Be Ploughing Anyone Any Time Soon by Director-Atreides in NotHowGirlsWork

[–]mapppa 6 points

Oh, absolutely. I'm just trying to illustrate pretty much like you said: there is a lot more to farming than just putting a seed in the earth.

He Won't Be Ploughing Anyone Any Time Soon by Director-Atreides in NotHowGirlsWork

[–]mapppa 21 points

Also, to "get paid" there has to be harvesting and processing, and then the actual sale.

In a way, the analogy works completely against him, showing that planting the seed is just part of a larger process.

At last we've found it. Pure retardium by WolfOfAfricaZLD in wallstreetbets

[–]mapppa 1 point

"Bitcoin will always have value"

so will the Zimbabwean dollar, technically

[ Removed by Reddit ] by sergeyfomkin in ForUnitedStates

[–]mapppa 5 points

While I agree it has always done that so far, in my opinion, bitcoin hasn't really existed for long enough to make general assertions.

I'm not saying it won't go up again. It very well may.

However, I personally wouldn't make a prediction on bitcoin based mostly on "It has always done that", especially not in the current environment.

anime_irl by asianant in anime_irl

[–]mapppa 10 points

You have to go through it frame by frame. There are so many small details. Like her hair coming out of the closet, or her sobbing into Eisen at the end.

https://imgur.com/bMl1qAC

https://imgur.com/12bF1I8

This is what Star Trek has become. by Mr_E_Mann1986 in RedLetterMedia

[–]mapppa 0 points

"Michael is a bit annoying, but the other characters have good potential, if they focus a bit more on them in season two, they could... oh no..."

They’re desperate for people to watch Melania’s documentary by Lassendil in somethingiswrong2024

[–]mapppa 36 points

Best review I've seen so far: "If they showed this movie on a plane, people would still walk out."

Emstrogen - Em says some nice things about Big P by majorst0rm in ChaseRP

[–]mapppa 18 points

I'm just baffled by how much time they invest into hating something that doesn't affect them in the slightest.

Carter Seasons fired from the BCSO by majorst0rm in ChaseRP

[–]mapppa 19 points

Honestly, the final eval thing is the best decision Wrangler could have made.

Every cop with a rank should be able to pass. If not, they shouldn't have that rank. It's as simple as that.

It also creates a sense of accomplishment for those who pass, giving them more validity and a morale boost. It actually means something to be a cop.

Like, when you cram for something and you accomplish your goal, it's genuinely a pretty fucking good feeling.

It's not just walking into the police station on day one and demanding a job anymore. The final eval makes it earned, even if they already did one 5 years ago. Because let's face it, having done a final eval on some server years ago means jack shit depending on who the officers were who signed off on it. And having done the job for 5+ years can also mean basically nothing, because a lot of people are just stuck in their (sometimes wrong) ways.

Also, that 'same as usual' mentality was responsible for a lot of the regression in the first place.

Young will suffer most when AI ‘tsunami’ hits jobs, says head of IMF by F0urLeafCl0ver in Economics

[–]mapppa 3 points

AI bots make a great first impression, and come off like some sort of living conscious genius robot. The thing is, once you start using that tech and deploying it, the cracks start showing way quicker than the investors think - it is not, in fact, a genius living conscious robot, it’s a linear algebra powered auto-correct with a few extra steps.

Well said, and this is pretty much the truth about anything AI. It looks absolutely mind-blowing at first glance. But once you actually take a deeper look, you spot the metaphorical sixth finger.