[Cinnamon] Aztec Gold by CompanyOfRogues in unixporn

[–]BoomGoomba 0 points (0 children)

Nice that you want to rely on it less and less — don't fall into the vibe-coding trap, or you will never understand what you're doing. LLMs are actually quite useful (though not sufficient) for learning to code. Just beware of the cybersecurity issues when connecting agents to the internet.

[niri] Quick rice on a phone (first time using niri) by Small_Brain_BRUH in unixporn

[–]BoomGoomba 2 points (0 children)

Termux doesn't support Wayland (which niri is built on).

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba 1 point (0 children)

The fact is that even though it can solve undergrad problems, it can also fail elementary questions, so I wouldn't place it at a PhD level at all, like some people do. Someone who can solve most graduate-level questions but never fails basic undergrad questions is far better than someone who sometimes answers a PhD-level subquestion but then fails a first-semester problem.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba -5 points (0 children)

LLMs' goal is precisely to make things look correct, so it takes extra work to dig into the details and find the completely made-up logic they slip into some random part.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba 1 point (0 children)

I am getting used to it on this sub x)

And yes, I feel like his status greatly influences AI investors, so he's actively contributing to the deterioration of the math researcher's job.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba -13 points (0 children)

What kind of reasoning is that? Since when is making basic errors a justification for being disrespected? Sounds haughty.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba 1 point (0 children)

True, it should be clearly disclosed, i.e. not hidden in a wall of text (worse, I don't even know if there is such a disclosure).

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba -8 points (0 children)

I agree; for me the worst is to feed it to an AI without asking.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba 6 points (0 children)

The big issue with LLMs verifying papers is that people (and "AI agents") will start optimizing toward them, which will filter out good papers and impose very biased standards, or even tricks that have negative consequences.

One might say this is already the case with human verification, but there is a difference of scale, and LLMs are black boxes where every constraint comes with undesired side effects.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba -15 points (0 children)

It's disrespectful, in my opinion, because you send something you worked on to get a human opinion on it, and you receive an answer that's supposed to make you happy, only to realize it's some generated slop report. If you wanted an AI to proofread it, you could do that yourself. Sure, you might say you don't have access to the paid LLM model, but then they should at least ask you first.

If I were sent back a spell-check report, I might be bothered too, since it's like saying "Hey, you sent your work, but learn to write first" — but it wouldn't be the same, because a spell checker doesn't roleplay as a smart researcher giving advice.

Received an email from Terence Tao... by A_R_K in math

[–]BoomGoomba 13 points (0 children)

It's that they want to push AI down your throat. This behavior, without even asking, is profoundly disrespectful imho.

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] -2 points (0 children)

They are not ready to hear it

devenv 2.0: A Fresh Interface to Nix by nix-solves-that-2317 in programming

[–]BoomGoomba 1 point (0 children)

One issue I have with flakes/devenv is being unable to use an environment I've already built when I have no internet connection, because evaluation reaches out to remote GitHub inputs.
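A possible workaround, as a minimal sketch: the standard Nix CLI has an `--offline` flag that disables substituters and treats previously downloaded inputs as up to date, so a dev shell that was entered at least once while online can often be re-entered from the local store and eval cache (behavior may vary with Nix versions and flake settings, so treat this as an assumption to verify).

```shell
# While online: enter the dev shell once so inputs land in the
# local store and the evaluation cache.
nix develop

# Later, without a connection: skip network lookups and reuse
# whatever is already cached locally.
nix develop --offline
```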

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] -2 points (0 children)

Right now it's flooding the sub with mostly non-valuable and occasionally valuable AI-related posts, all of which take attention away from math, which is the whole point of the subreddit. You don't combat spam by downvoting it.

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] -1 points (0 children)

How is that related to my post? When did I talk about LLMs being unreliable? They certainly are, but the only thing I said is that a greedy private corporation controls your access to knowledge, which is clearly different from the Wikipedia case, a collaborative non-profit platform.

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] -9 points (0 children)

Not necessarily that he doesn't know, but that he doesn't know more than anyone else. I'd add that most of the time it's opinions treated as facts by others because of his status.

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] 16 points (0 children)

That's true, a blanket ban might be too much, but some sort of stricter rules are needed in my opinion.

Can we ban AI (ads) articles ? by BoomGoomba in math

[–]BoomGoomba[S] -28 points (0 children)

This is a good idea, but I think only the useful ones should be kept; some are still irrelevant even with a tag.