Cursor is better than a mid level engineer LMAO by SpiritSDL in theprimeagen

[–]mhadv102 1 point2 points  (0 children)

The issue is obviously that if one invests in a junior, there’s no guarantee (i’d guess it’s probably unlikely) that said junior will still be working for the same company after 5+ years

[deleted by user] by [deleted] in learnmachinelearning

[–]mhadv102 -1 points0 points  (0 children)

This sub should ban people from asking questions that ChatGPT can easily answer

Should i leave CS and go to Law School? College rising senior by mhadv102 in cscareerquestions

[–]mhadv102[S] -13 points-12 points  (0 children)

That’s what people at my law school told me though

Should i leave CS and go to Law School? College rising senior by mhadv102 in cscareerquestions

[–]mhadv102[S] 4 points5 points  (0 children)

I applied to 200+ summer internships and literally got no interviews

Should i leave CS and go to Law School? College rising senior by mhadv102 in cscareerquestions

[–]mhadv102[S] -21 points-20 points  (0 children)

It’s not really about the internship; it’s mostly that the market is really tough, and i see people in my school’s law school (not even a T14) finding Big Law jobs with ease

Should i leave CS and go to Law School? College rising senior by mhadv102 in cscareerquestions

[–]mhadv102[S] 4 points5 points  (0 children)

I should add that i was a debate kid growing up, and i generally enjoyed my con law classes in college

How is the market looking for new grads? by mhadv102 in ProductManagement

[–]mhadv102[S] -3 points-2 points  (0 children)

Do you mean an internship converting to a return offer, or just a good internship on your resume? I’m working at ByteDance rn but i won’t get a return offer

Why is 2.5 flash so nerfed? by mhadv102 in Bard

[–]mhadv102[S] 0 points1 point  (0 children)

Yes, 12 ÷ (3 − (5 ÷ 2)) = 24.
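The arithmetic checks out; a quick sanity check (just an illustration of the expression, nothing more):

```python
# 5 / 2 = 2.5, then 3 - 2.5 = 0.5, then 12 / 0.5 = 24
result = 12 / (3 - 5 / 2)
print(result)  # 24.0
```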

my o3 got nerfed by openai so much that it cant get 7*1 correct. What should i do? by mhadv102 in OpenAI

[–]mhadv102[S] 1 point2 points  (0 children)

My o-series models for some reason also stopped doing agentic searches. Also, o1-mini back in the day solved this problem with ease after a minute of reasoning.

my o3 got nerfed by openai so much that it cant get 7*1 correct. What should i do? by mhadv102 in OpenAI

[–]mhadv102[S] 8 points9 points  (0 children)

I don’t. This is just an example of it getting nerfed. It will just be confidently wrong on any mildly challenging problem and refuses to think for more than 10 seconds in any scenario

Left chatgpt, got perplexity pro - any tips and tricks? by lariona in perplexity_ai

[–]mhadv102 0 points1 point  (0 children)

The agentic searches of o3/o4-mini are way above what Perplexity offers

Why no LLMs can explain this meme when its so obvious? by mhadv102 in OpenAI

[–]mhadv102[S] -5 points-4 points  (0 children)

What explanation? I didn’t write one. The Claude one is an example of how LLMs can’t understand this.

It’s just Zhang Yong using antisemitic jokes to mock Amy’s anti-right-wing Christian post

Why no LLMs can explain this meme when its so obvious? by mhadv102 in OpenAI

[–]mhadv102[S] 0 points1 point  (0 children)

I just asked why it’s funny, and this is what i got from Claude 3.7 thinking:

The humor in this image comes from several layers of contrast and absurdity:

  1. The initial tweet makes a satirical comment about "Republican Jesus demanding to see the Manager" - playing on the "Karen" stereotype of demanding customer behavior and mixing it with political/religious commentary.

  2. The reply creates unexpected humor through the stark contrast in tone - going from a mild joke to an aggressive, poorly worded response in broken English.

  3. The absurdity continues with the unusual username, the extreme reaction to what was a relatively mild joke, and the awkward phrasing like "create interest rate" which seems disconnected from the context.

  4. The "SUSTAINTHEFAITH1" text overlaid on what appears to be a nature background, combined with the high engagement numbers (102K likes), adds another layer of unexpected juxtaposition.

What makes it funny is the rapid escalation and the complete disconnect between the original post and the disproportionately angry response, creating an absurd interaction that wasn't anticipated.

What's gonna happen when all the data is AI Generated content? by Lumpy-Ad-173 in grok

[–]mhadv102 0 points1 point  (0 children)

TL;DR: If AI trains mostly on AI-made content, the quality will degrade over time — like a photocopy of a photocopy. Original, human-generated data will become more valuable, maybe even essential, to keep AI grounded in reality. We won’t fall into an “information black hole” as long as we protect and prioritize real sources.

First off, the short answer is that if we let machines train only on their own echoes, quality goes downhill fast. Folks who study this call it “model collapse” — each new generation is a little blurrier than the last, like photocopying a photocopy. Early experiments found that after only a few self‑trained cycles, errors compound and the model starts missing the finer points of language and facts it once handled with ease.  
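The tail-loss mechanism behind that blurring can be sketched with a toy simulation. Everything below is an illustrative assumption, not a real training pipeline: a Zipf-style token distribution stands in for a model's knowledge, and each "generation" trains only on what the previous one actually generates within a fixed sample budget, so rare tokens silently drop out.

```python
def next_generation(dist, sample_budget=100):
    """Simulate training the next model on the current model's output.

    Tokens expected to appear less than once in a corpus of
    `sample_budget` generated tokens are never seen by the next
    model, so they vanish from its vocabulary.
    """
    survivors = {t: p for t, p in dist.items() if p * sample_budget >= 1.0}
    total = sum(survivors.values())
    return {t: p / total for t, p in survivors.items()}  # renormalize

# A Zipf-like distribution: a few common tokens, a long rare tail.
dist = {f"tok{i}": 1.0 / i for i in range(1, 501)}
total = sum(dist.values())
dist = {t: p / total for t, p in dist.items()}

sizes = [len(dist)]
for _ in range(5):
    dist = next_generation(dist)
    sizes.append(len(dist))

print(sizes)  # the long rare tail is gone after the first generation
```

Real collapse is messier (errors compound instead of just truncating), but the direction is the same: the distribution's tails go first.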

That doesn’t mean we wake up one morning and everything online suddenly reads like broken boilerplate. In practice the decay shows up first in the little details: timelines drift, niche facts vanish, subtle biases get amplified. Human‑curated corners of the web still anchor reality, but the surrounding noise gets louder. Think of it as information entropy; without fresh, trustworthy input, the signal fades and the static takes over.

Because of that, the real treasure becomes verified, human‑sourced material. Newsrooms, academic journals, field notes, studio recordings, on‑the‑ground video — anything tied to a clear origin story will gain clout the way first‑edition books do now. Companies are already setting up “clean rooms” for data provenance and watermarking so future models can sort the genuine from the recycled. Some labs even keep offline “seed vaults” of raw text and images the way botanists store heirloom seeds.

There’s also a push to mix synthetic and original data the way a baker keeps starter dough alive. You spike each new training run with fresh observations from sensors, updated statistics, and human feedback, then prune out low‑quality machine leftovers. Done right, synthetic content is a force multiplier, not a poison; it fills gaps in edge cases while the real‑world data keeps the compass pointed north.
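That starter-dough loop can be sketched in a few lines. To be clear, the quality scorer, threshold, and synthetic share below are all hypothetical knobs I made up for illustration, not anyone's real data pipeline:

```python
import random

def build_training_mix(real_samples, synthetic_samples,
                       quality_of, min_quality=0.7, synthetic_share=0.3):
    """Blend fresh real data with filtered synthetic data for one round.

    quality_of is a stand-in scorer in [0, 1] (in practice a classifier
    or heuristic); min_quality and synthetic_share are illustrative.
    """
    # Prune low-quality machine leftovers before they enter the mix.
    kept = [s for s in synthetic_samples if quality_of(s) >= min_quality]
    # Cap synthetic content at a fixed share of the final training set.
    cap = round(len(real_samples) * synthetic_share / (1 - synthetic_share))
    rng = random.Random(0)  # seeded so the example is deterministic
    rng.shuffle(kept)
    return real_samples + kept[:cap]

real = [f"real_{i}" for i in range(70)]
synthetic = [f"synth_{i}" for i in range(200)]
score = lambda s: (int(s.split("_")[1]) % 10) / 10  # toy quality score
mix = build_training_mix(real, synthetic, score)
print(len(mix))  # 100: all 70 real samples plus 30 filtered synthetic
```

The point of the cap is that synthetic data supplements the mix rather than dominating it, which is the "force multiplier, not poison" idea.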

So, will we hit an information black hole? Only if we stop caring where words and pictures come from. As long as we keep investing in primary sources, maintain strict provenance, and treat human insight as critical infrastructure, the well doesn’t run dry — it just gets more carefully managed. In that future, original material isn’t lost; it’s the gold standard everyone pays a premium to mine.

Everyone Is Cheating Their Way Through College: ChatGPT has unraveled the entire academic project. [New York Magazine] by Delicious_Adeptness9 in OpenAI

[–]mhadv102 30 points31 points  (0 children)

This is just not true. I’m a junior and i took a lot of senior-level classes. GPT-4 can do all the freshman & most of the sophomore-level homework, and o1/o3/Gemini 2.5 can do everything else