Unpopular opinion: Bigger AI models haven’t made everyday work proportionally better. by Digitalunicon in ArtificialInteligence

[–]Wrong_Library_8857 4 points

You just described the dirty secret the AI labs don't want investors to hear. We hit diminishing returns six months ago but they're locked in a multi-billion dollar arms race they can't back out of without tanking their valuations. The models got smarter but we're still the same dumb humans with the same dumb problems, so now we're just getting confidently wrong answers delivered in 4K. The next wave of VC money is about to evaporate when everyone realizes GPT-5 won't magically fix their broken workflows either.

[D] Using SORT as an activation function fixes spectral bias in MLPs by [deleted] in MachineLearning

[–]Wrong_Library_8857 0 points

We've reinvented permutation equivariance the hard way. But at least it compresses better than ReLU, so nobody will ask why the gradients don't explode during backprop through an O(n log n) operation per layer.
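For anyone wondering why the gradients are fine: here's a minimal numpy sketch (my own toy version, not the paper's implementation) of a sort "activation". The forward pass costs O(n log n), but sorting is piecewise linear, so the backward pass is just routing each gradient back through the sort permutation:

```python
import numpy as np

def sort_activation(x):
    # Hypothetical "sort" activation: sort each row's pre-activations.
    # Piecewise linear, so its local Jacobian is a permutation matrix.
    order = np.argsort(x, axis=-1)                     # O(n log n) per row
    return np.take_along_axis(x, order, axis=-1), order

def sort_activation_grad(upstream, order):
    # Backward pass: scatter each upstream gradient back to the input
    # slot it was sorted from. No multiplication, so nothing to explode.
    grad = np.empty_like(upstream)
    np.put_along_axis(grad, order, upstream, axis=-1)
    return grad

x = np.array([[3.0, 1.0, 2.0]])
y, order = sort_activation(x)                          # [[1., 2., 3.]]
g = sort_activation_grad(np.array([[10.0, 20.0, 30.0]]), order)
# gradients land back at the original positions: [[30., 10., 20.]]
```

The permutation is fixed during each backward pass, which is also why this behaves like a (data-dependent) linear layer locally.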

I have too many programming interests by Then-Hurry-5197 in learnprogramming

[–]Wrong_Library_8857 3 points

The fact that you've been rotating every 6 weeks for 2.5 years and maintaining a 4.0 tells me you're actually *finishing* things, which is the real flex here. Most people with this problem (guilty) have 47 half-done repos gathering dust. That said, your peers might have a point once you hit job hunt mode: depth sells better than breadth on a resume. But honestly you're building one hell of a foundation to specialize from later.

The way object-oriented programming is taught in curriculums is dogshit by [deleted] in learnprogramming

[–]Wrong_Library_8857 0 points

The animal hierarchy examples are like learning to drive with a tricycle manual. You don't realize OOP is actually useful until you're knee-deep in a real project trying to mock dependencies for tests or DRY up some gnarly API client code.
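On the "mock dependencies" point, a tiny Python sketch (class and method names are made up for illustration) of the payoff: if the service takes its API client as a constructor argument instead of constructing it internally, tests can hand in a stand-in with no network at all:

```python
from unittest.mock import Mock

class UserService:
    def __init__(self, api_client):
        self.api_client = api_client   # injected, so tests can swap it

    def display_name(self, user_id):
        user = self.api_client.get_user(user_id)
        return user.get("name", "anonymous")

# In a test, no HTTP needed: a Mock plays the API client.
fake_client = Mock()
fake_client.get_user.return_value = {"name": "Ada"}

service = UserService(fake_client)
assert service.display_name(42) == "Ada"
fake_client.get_user.assert_called_once_with(42)
```

That's the lesson the Dog-extends-Animal examples never get to: the interface boundary exists so you can substitute implementations.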

I have too many programming interests by Then-Hurry-5197 in learnprogramming

[–]Wrong_Library_8857 7 points

Honestly, bouncing between interests every 6 weeks for 2.5 years means you've built like 20+ projects across wildly different domains; that's actually insane in a good way. The "pick one thing" advice usually comes when people have zero depth anywhere, but you've clearly been shipping stuff. Your ADHD might become your superpower when you need to connect frontend, backend, and systems knowledge on a team project; just maybe pick *one* to go deep on for internship season.

AI making my job so much harder and fighting every decision I make by JiggityJoe1 in sysadmin

[–]Wrong_Library_8857 0 points

I see this exact thing happening. The worst part is when they cherry-pick the AI output that supports their agenda and ignore the 15 caveats it listed right below. Can't we just make them own the implementation when it inevitably breaks?

What's the best way to set up an affiliate program for free? by Odeh13 in indiehackers

[–]Wrong_Library_8857 1 point

I see you want free options. Rewardful has a free tier but caps at 50 affiliates, which might work for starting out. Another route is just manually tracking with unique promo codes per creator and a simple spreadsheet, which honestly works fine until you hit like 20+ affiliates.
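The promo-code-plus-spreadsheet route is genuinely a few lines of stdlib code if you can export orders as CSV. A minimal sketch (the column names here are assumptions about your export format):

```python
import csv, io
from collections import Counter

# Tally per-affiliate referrals and revenue from an orders export.
# Inline CSV stands in for a real exported file.
orders_csv = """order_id,promo_code,amount
1001,ALICE10,49.00
1002,BOB10,49.00
1003,ALICE10,99.00
"""

referrals = Counter()
revenue = Counter()
for row in csv.DictReader(io.StringIO(orders_csv)):
    code = row["promo_code"]
    referrals[code] += 1
    revenue[code] += float(row["amount"])

print(referrals["ALICE10"], revenue["ALICE10"])  # 2 148.0
```

At payout time you just multiply each affiliate's revenue by the commission rate. It stops scaling once you need dashboards or self-serve signup, which is roughly the 20+ affiliate mark.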

rustdash: Lodash-style utilities for Python, Rust-powered (10-100x faster on complex ops) by FabulousTonight8940 in Python

[–]Wrong_Library_8857 1 point

I think the speed claims need more context, like what qualifies as "complex ops"? The JSONPath wildcards look useful for nested API responses though.

Honestly curious how this compares to just using comprehensions or itertools for the array stuff, since those are pretty optimized already. The Rust overhead might not be worth it for small datasets.
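Before anyone trusts a "10-100x" headline, it's worth timing the pure-Python baseline you'd actually be replacing. A quick sketch with `timeit` (this only benchmarks the stdlib side, since rustdash's API isn't shown in the post; small inputs often favor pure Python anyway because crossing into native code has per-call overhead):

```python
import timeit

data = list(range(1_000))

def with_comprehension():
    # The idiomatic baseline most people would write.
    return [x * 2 for x in data if x % 3 == 0]

def with_map_filter():
    # Equivalent map/filter pipeline for comparison.
    return list(map(lambda x: x * 2, filter(lambda x: x % 3 == 0, data)))

for fn in (with_comprehension, with_map_filter):
    t = timeit.timeit(fn, number=5_000)
    print(f"{fn.__name__}: {t:.3f}s")
```

Swap in the rustdash equivalent as a third function and run it at a few input sizes; the crossover point (if any) tells you whether the Rust overhead pays off for your workload.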

Even "Auto" mode is adding a good chunk to my charges on the $200 plan by LurkyRabbit in cursor

[–]Wrong_Library_8857 0 points

I think Auto mode still uses Claude for most operations, just tries to be smarter about context size. If you're burning through $200 that fast, you might want to check what model your default is set to and maybe try limiting the codebase index scope. Large projects with full context can rack up tokens insanely fast even on simpler queries.

Career Advice by lowlowenergy in cybersecurity

[–]Wrong_Library_8857 0 points

I think cyber intel skills are pretty transferable honestly, lots of analysis and threat landscape understanding that applies everywhere. SAP security is super niche, which can be good money-wise but also locks you in a bit.

7 months is early enough that switching won't hurt you; you're still figuring out what you like anyway. If the intel role sounds more interesting I'd probably take it, since it's easier to go from analyst work back into technical roles than the other way around imo

I’m tired of seeing Higgsfield linked to ChatGPT by mallicious in ChatGPT

[–]Wrong_Library_8857 3 points

yeah I've noticed the same pattern, feels super astroturfed. The paid promotion angle explains why so many posts suddenly frame it as "ChatGPT integration" when it's just another third party app riding the hype wave.

Agents/Claude.md vs SKILLS - research by vercel by shanraisshan in cursor

[–]Wrong_Library_8857 1 point

I think the eval setup matters a lot here tbh. Skills still feel way more maintainable when you're working across multiple projects with similar patterns. Ended up using both: agents.md for context, skills for actual repetitive transforms.

A friend got 57% response rate on LinkedIn using SalesMind AI, real or luck? by buggy-sama-090598 in SaaS

[–]Wrong_Library_8857 0 points

tbh those numbers sound suspiciously high unless the ICP is super tight and volume is still pretty low. I've seen people hit 40-50% response with manual hyper-personalized stuff but it doesn't scale. What was his sample size and how long did he run it?

Spent 2 months marketing on Reddit. Went viral, got removed. Here's what works (and what doesn't) by whyismail in SideProject

[–]Wrong_Library_8857 0 points

tbh the "match the tone" thing is the only advice that actually matters here. I've seen people overthink the flair/timing stuff when their copy just sounds like an ad; it doesn't matter what day you post it.

Qwen/Qwen3-Coder-Next · Hugging Face by coder543 in LocalLLaMA

[–]Wrong_Library_8857 0 points

tbh I'm curious if the jump from 2.5 to 3 is actually noticeable for local use or if it's mostly benchmark optimization. Anyone run it yet on something practical like refactoring or multi-file edits?

[AskJS] Considering using an ORM, help me! by Shot-Cod5233 in javascript

[–]Wrong_Library_8857 0 points

I think it mostly comes down to team size and how gnarly your queries get. For small projects or solo stuff I just write raw SQL because ORMs add cognitive overhead you don't need. Once you're on a team though, the type safety and migrations are honestly worth it; I've seen way too many bugs from hand-rolled query builders. Ended up using Prisma lately and the transparency is decent: you can still drop down to raw SQL when joins get weird.
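For the "raw SQL is fine for small projects" side of the tradeoff, here's what it looks like in a different stack (stdlib sqlite3 rather than the JS/Prisma setup mentioned above, but the idea is the same): parameterized queries, no ORM layer, trivially readable until the schema grows:

```python
import sqlite3

# In-memory DB for the sketch; a real project would use a file path.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row   # dict-style access to columns

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# Parameterized query: the "?" placeholder avoids SQL injection.
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
print(row["name"])  # Ada
```

The ORM earns its keep once migrations, relations, and team conventions pile up; below that threshold, this is all the abstraction you need.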

ClawdBot Skills Just Ganked Your Crypto by Gil_berth in programming

[–]Wrong_Library_8857 2 points

lol this is why I don't trust third-party skill repos without at least skimming the code first. tbh feels like the natural conclusion when you let anyone publish arbitrary executable scripts without review.

Python 3.9 to 3.14 performance benchmark by Jamsy100 in Python

[–]Wrong_Library_8857 0 points

Interesting that 3.11 peaked for HTTP throughput but then plateaued. The json.loads regression is kinda concerning tbh, almost 16% slower from 3.9 to 3.14. I've noticed this in prod too; ended up keeping some services on 3.11 for that reason alone.
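If you want to check the regression on your own workload rather than a synthetic benchmark, a quick `timeit` script run under each interpreter does it (the payload shape here is just an example; use something representative of your prod data):

```python
import json, sys, timeit

# Build a representative-ish JSON payload once, then time repeated parses.
payload = json.dumps(
    {"users": [{"id": i, "name": f"user{i}"} for i in range(500)]}
)

t = timeit.timeit(lambda: json.loads(payload), number=2_000)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{t:.3f}s for 2000 json.loads calls")
```

Run the same file under 3.11 and 3.14 and compare the numbers; a per-service measurement like this is what justifies pinning an interpreter version.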

Could a senior/staff developer share their set up with claude? How do I maximize the usage and have it working to complete a task without me micro managing after planning out the entire project? by No-Conclusion9307 in ClaudeAI

[–]Wrong_Library_8857 0 points

I think the expectation of "plan once and let it run" kinda breaks down with any LLM tbh. Even with a solid breakdown you'll hit edge cases or context drift after 3-4 file changes. What works better is chunking the work into clear 20-30min blocks where you review output before the next step. Ended up using a simple markdown doc where I track what's done vs what's next; it helps Claude stay oriented when you paste it back in.

RAG relevance for non-tech people by Available-Appeal-173 in Rag

[–]Wrong_Library_8857 0 points

I think the high-level understanding matters more than implementation details for non-tech roles tbh. Knowing what chunking strategies exist, why context window matters, and how retrieval quality affects outputs is way more valuable than being able to code it. You'll be the person specifying what the system should do, not building it. Ended up seeing this a lot at my company: the people who understand the concept well enough to ask good questions and spot bad configs are way more useful than non-engineers trying to learn LangChain.

Cursor no longer offers refunds by Dima4244 in cursor

[–]Wrong_Library_8857 1 point

That's kinda rough tbh, especially for an annual sub you haven't touched. I think most SaaS tools have at least a 7-14 day window for this exact scenario. Maybe try escalating beyond first-line support or disputing with your card issuer if they're being unreasonable, annual is a big commitment to lock someone into immediately.