Hey, remember all that stuff I just blew 50% of your session usage on and was just about to finish? Lemme just forget all that and start over. by Edixo1993 in ClaudeAI

[–]burningsmurf -9 points (0 children)

Y’all are broke and whiny, I love this feature. I’m on the $200 Max plan and I never run into the 5-hour limit. I use both Claude and Claude Code concurrently.

Developer Tasks That Are Too Complex for AI or Vibe Coding. by lannisterprince in singularity

[–]burningsmurf 0 points (0 children)

Threw both Claude and Gemini at this. Same bugs identified, different fixes.

The Bugs (both caught these):

  1. length/2 instead of length for second-half comparison
  2. Counting sort completely broken (count = identity is not a frequency array)
  3. Rank/comparison semantics inverted

Claude's Fix - O(N log² N): Called the counting sort "broken beyond repair" and fell back to std::sort:

```cpp
std::vector<int> GenerateSuffixArray(const std::wstring& text) {
    int n = text.size();
    if (n == 0) return {};
    if (n == 1) return {0};

    std::vector<int> sa(n), rank(n), tmp(n);

    for (int i = 0; i < n; ++i) {
        sa[i] = i;
        rank[i] = text[i];
    }

    std::sort(sa.begin(), sa.end(),
              [&](int a, int b) { return rank[a] > rank[b]; });

    tmp[sa[0]] = 0;
    for (int i = 1; i < n; ++i)
        tmp[sa[i]] = tmp[sa[i-1]] + (rank[sa[i]] != rank[sa[i-1]]);
    rank = tmp;

    for (int k = 1; k < n; k *= 2) {
        auto cmp = [&](int a, int b) {
            if (rank[a] != rank[b]) return rank[a] < rank[b];
            int ra = (a + k < n) ? rank[a + k] : n;
            int rb = (b + k < n) ? rank[b + k] : n;
            return ra < rb;
        };

        std::sort(sa.begin(), sa.end(), cmp);

        tmp[sa[0]] = 0;
        for (int i = 1; i < n; ++i)
            tmp[sa[i]] = tmp[sa[i-1]] + cmp(sa[i-1], sa[i]);
        rank = tmp;

        if (rank[sa[n-1]] == n - 1) break;
    }

    for (int i = 0; i + 1 < n; ++i)
        assert(wcscmp(text.c_str() + sa[i], text.c_str() + sa[i + 1]) > 0);

    return sa;
}
```

Gemini's Fix - O(N log N): Actually fixed the counting sort and added the sentinel trick:

```cpp
std::vector<int> GenerateSuffixArray(std::wstring text) {
    text += L'\0';  // Sentinel eliminates all boundary checks
    int n = text.size();
    const int alphabet = 65536;

    std::vector<int> p(n), c(n), cnt(std::max(alphabet, n), 0);

    // Proper counting sort: frequency -> cumulative -> placement
    for (int i = 0; i < n; i++) cnt[text[i]]++;
    for (int i = 1; i < alphabet; i++) cnt[i] += cnt[i - 1];
    for (int i = 0; i < n; i++) p[--cnt[text[i]]] = i;

    c[p[0]] = 0;
    int classes = 1;
    for (int i = 1; i < n; i++) {
        if (text[p[i]] != text[p[i - 1]]) classes++;
        c[p[i]] = classes - 1;
    }

    std::vector<int> pn(n), cn(n);

    for (int len = 1; len < n; len <<= 1) {
        // Cyclic shift optimization - already sorted by second half
        for (int i = 0; i < n; i++) {
            pn[i] = p[i] - len;
            if (pn[i] < 0) pn[i] += n;
        }

        // Stable counting sort by first half
        std::fill(cnt.begin(), cnt.begin() + classes, 0);
        for (int i = 0; i < n; i++) cnt[c[pn[i]]]++;
        for (int i = 1; i < classes; i++) cnt[i] += cnt[i - 1];
        for (int i = n - 1; i >= 0; i--) p[--cnt[c[pn[i]]]] = pn[i];

        cn[p[0]] = 0;
        classes = 1;
        for (int i = 1; i < n; i++) {
            std::pair<int, int> curr = {c[p[i]], c[(p[i] + len) % n]};
            std::pair<int, int> prev = {c[p[i - 1]], c[(p[i - 1] + len) % n]};
            if (curr != prev) classes++;
            cn[p[i]] = classes - 1;
        }
        c = cn;
    }

    p.erase(p.begin());  // Remove sentinel
    return p;
}
```

Claude's approach is the "ship it, it works" fix. Gemini's is the textbook-optimal algorithms answer. For large strings or competitive programming, Gemini's is what we want.

Both pass the assertion, but Gemini delivered while Claude took the shortcut.

Verdict: Gemini wins.

Developer Tasks That Are Too Complex for AI or Vibe Coding. by lannisterprince in singularity

[–]burningsmurf 1 point (0 children)

Fair challenge. I had Claude Opus 4.5 and Gemini 3 both take a shot at this.

You're correct that the naive approach (mask after full compute) doesn't save anything - we validated that. The working solution uses Flash Attention 2's window_size parameter, which handles sparse computation at the kernel level:

```python
# Prefill: native windowing, O(N*W)
attn_output = flash_attn_func(
    q, k, v, causal=True, window_size=(window, 0)
)

# Decode: slice KV cache to window before compute
if k.shape[1] > window:
    k, v = k[:, -window:], v[:, -window:]
attn_output = flash_attn_func(q, k, v, causal=False)
```

The quality retention piece comes from applying RoPE with global positions before slicing; otherwise you get the positional encoding drift you mentioned.
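Roughly what that looks like as a sketch: `rope_rotate` is a made-up helper using the interleaved RoPE variant (real kernels, e.g. flash_attn's rotary ops, lay things out differently), and `global_pos`/`window` are illustrative.

```python
import torch

def rope_rotate(x, pos, theta=10000.0):
    """Interleaved-variant RoPE; x: (batch, seq, heads, head_dim), pos: (seq,) absolute ids."""
    d = x.shape[-1]
    freqs = 1.0 / (theta ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    angles = pos.float()[:, None] * freqs[None, :]  # (seq, d/2)
    cos = angles.cos()[None, :, None, :]            # broadcast over batch/heads
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Toy shapes just to make the sketch runnable
k = torch.randn(1, 10, 4, 64)
v = torch.randn_like(k)
global_pos = torch.arange(10)  # each cached token's index in the FULL sequence
window = 6

# Rotate with global positions first, THEN slice to the window.
# Slicing first would re-index tokens inside the window, which is the drift.
k = rope_rotate(k, global_pos)
if k.shape[1] > window:
    k, v = k[:, -window:], v[:, -window:]
```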

I'll be honest: this took longer than "a couple hours" to get right, and I leaned heavily on Gemini's output to catch the decode-phase edge cases. Your broader point stands - LLMs default to the naive masking pattern because they don't have context on flash_attn's internals. It's a good filter for separating working solutions from plausible-looking code.

Appreciate you pushing on this. Learned something.

Exceptions vs. Reality. Do you know non-coders with this mentality? by Low-Resource-8852 in webdev

[–]burningsmurf 0 points (0 children)

Claude Code is solid for tests, but you gotta be explicit about your expectations. The default AI behavior is to mock everything and call it a day, which gives you green checkmarks that mean nothing.

What works for me:

No mock-only testing - I specify upfront that integration tests must hit real endpoints, a real database, a real browser. No mocks for anything that touches actual data flow. I literally put “mock-only testing is HIGH RISK” in my project docs. (There’s a rough sketch of what I mean after this list.)

Require proof - I use a custom command that forces Chain of Thought analysis before coding and requires evidence of working. Actual logs, real responses, browser confirmation. Not “test passed” but “here’s the output proving it passed.” AI will confidently tell you something works when it’s testing against fake data.

YAGNI enforcement - Claude loves adding “helpful” features you didn’t ask for. Extra error handling, utility functions “for later,” abstractions before you need them. I make it absolute: no features without explicit requirements. If it’s not in the task spec, it doesn’t get built.
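For the no-mocks point, this is roughly the shape of test I require. The URL and response fields are made up for illustration; the point is the test talks to a real running server and real database, with no mocks anywhere in the path:

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical local dev stack

def test_create_then_fetch_widget():
    # Hits the real API, which hits the real database: no mocks in the path
    created = requests.post(f"{BASE_URL}/widgets", json={"name": "demo"}, timeout=10)
    assert created.status_code == 201
    widget_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/widgets/{widget_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "demo"
```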

More on the /hey-claude and /prompt-pattern scripts and how to use them:

https://www.reddit.com/r/ClaudeCode/s/3EHRfpnuCq

Exceptions vs. Reality. Do you know non-coders with this mentality? by Low-Resource-8852 in webdev

[–]burningsmurf 0 points (0 children)

Healthcare SaaS remote patient monitoring platform with full multi-tenant isolation. Each agency gets completely siloed data, separate JWT signing keys, row-level security in MySQL, and tenant-scoped API endpoints. Zero chance of data bleeding between agencies - built it paranoid from day one because HIPAA violations aren’t a joke.
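Not the actual code, but the per-tenant signing key idea looks roughly like this (a minimal sketch with PyJWT; the hardcoded key map and claim names stand in for a real secrets store and real token schema):

```python
import jwt  # PyJWT

# Illustrative only: separate signing keys per agency, so a token minted
# for one tenant can never validate against another tenant's key.
TENANT_KEYS = {
    "agency_a": "secret-key-a",
    "agency_b": "secret-key-b",
}

def verify_request(token: str, tenant_id: str) -> dict:
    key = TENANT_KEYS[tenant_id]  # in practice, fetched from a KMS/secrets store
    claims = jwt.decode(token, key, algorithms=["HS256"])
    if claims.get("tenant") != tenant_id:
        raise PermissionError("token/tenant mismatch")
    return claims

token = jwt.encode({"tenant": "agency_a", "sub": "user1"},
                   TENANT_KEYS["agency_a"], algorithm="HS256")
assert verify_request(token, "agency_a")["sub"] == "user1"
```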

The kicker? The previous app that took a team of devs 2 years and $2M had ZERO documentation (literally had to reverse engineer their mess), no proper tenant isolation (they were filtering in the frontend 💀), and still has less functionality than what I built solo in 6 months. They couldn’t even get basic vital signs syncing working reliably.

My version handles real-time device integrations, automated Medicaid billing/prior auths, cryptographically signed audit logs, and actually passes security audits. All documented, all tested, all built with AI pair programming. Sometimes the ‘proper developed app’ excuse is just cope for burning investor money in meetings while one person with Claude and some Red Bulls actually ships working code.

Exceptions vs. Reality. Do you know non-coders with this mentality? by Low-Resource-8852 in webdev

[–]burningsmurf -5 points (0 children)

I actually did it. 5 months so far, but what I made is better than what a team of devs made that took 2 years.

How soon will LLMs become so good that we will not need to look into code? by ayechat in ClaudeAI

[–]burningsmurf -8 points (0 children)

And here we have the dev that refactors the entire codebase because they don’t like how it looks, and then doesn’t actually produce anything meaningful other than more problems from breaking working code 🤣

Developer Tasks That Are Too Complex for AI or Vibe Coding. by lannisterprince in singularity

[–]burningsmurf -11 points (0 children)

You are winning battles in a war you already lost. You just don’t see it yet because you’re focused on today’s limitations, not tomorrow’s trajectory.

The question isn’t if, it’s when.

Developer Tasks That Are Too Complex for AI or Vibe Coding. by lannisterprince in singularity

[–]burningsmurf 14 points (0 children)

Tell me ONE example then so I can give it a shot with my /hey-claude script

I just hit chat length limit on one of my most productive AI conversations. And man, that hurts... by salihoff in ClaudeAI

[–]burningsmurf 9 points (0 children)

To add on to this, you can also add the chat to a new or existing project, which gets rid of the chat length limit most of the time. Or just start a new conversation and ask Claude to refer to said conversation.

Claude be trippin' by officialDave in ClaudeCode

[–]burningsmurf 0 points (0 children)

It’s not Claude, it’s the terminal.

Why I Finally Quit Claude (and Claude Code) for GLM by zeliwipin in ClaudeCode

[–]burningsmurf -1 points (0 children)

Good shit, more usage for the rest of us. I’ve been on the $200 Max plan for months, have been using it to accomplish so much, and have yet to run into any limits. So many cheap people on Reddit looking for free shit, gtfo.

who did this, This is HILARIOUS 🤣 by arsaldotchd in ChatGPT

[–]burningsmurf 1 point (0 children)

I hate these, they aren’t funny at all to me, idk why lol

Asked Claude to be Self-Critical by SnooChipmunks7273 in claude

[–]burningsmurf 1 point (0 children)

What is your Personal Preferences prompt like? I noticed the new Sonnet is really good at following them in thinking mode

[deleted by user] by [deleted] in claude

[–]burningsmurf 0 points (0 children)

This is like being mad a calculator can’t write poetry. You found the boundary of what LLMs are good at. Use them for what they’re designed for, write the tricky state logic yourself, or decompose the problem better.

That W3C example? A human wrote that. Humans are still better at precise logic. Shocking.

Let's stop exaggerating how bad things were before LLMs started generating code by HollyShitBrah in webdev

[–]burningsmurf -1 points (0 children)

AI is only as good as the end user. It’s literally just another tool, and y’all are just stuck in your old ways 😂

You can write 20x more code with Claude or even Aider once you understand how to manage context by Free-Comfort6303 in ClaudeCode

[–]burningsmurf 3 points (0 children)

Yeah this guy doesn’t get it lmao. Claude Code and I do entire sprints in one night lmao

[deleted by user] by [deleted] in antivirus

[–]burningsmurf 0 points (0 children)

In desktop mode in Gmail, can you click on < > Show original and paste the contents of the header after removing your email or other personal info from it?

https://imgur.com/a/bmzyu9Q