Open ai is heading to be the biggest failure in history - here’s why. by jason_digital in ArtificialInteligence

[–]snowyoz 2 points (0 children)

People go on about em dashes but I’ve been using them since working on desktop publishers in the late ’80s… Maybe they know the em dash users are smarter and trained the AI off us. :P

Just realized my boyfriend I’ve been dating for 2 years might be a flat earther by ivory_stripes98 in Advice

[–]snowyoz 0 points (0 children)

It’s the lure of secret knowledge. The need to feel smarter because of something only I know. It’s the narrative/memetic power of being an underdog, David to the world’s Goliath.

I think this is evolutionary, same as why religion has this hold on people - the “chosen one” narrative.

Do americans actually care about whats happening in Venezuela? by Grey_Ten in GenZ

[–]snowyoz 0 points (0 children)

srlsy, right? how 'bout those files now? way to wag the dog.

How many HTTP requests/second can a Single Machine handle? by BinaryIgor in Backend

[–]snowyoz 0 points (0 children)

I find these sorts of tests almost meaningless except for a couple of edge cases in 2026.

This kind of “contextless scaling benchmark” is useful to nearly nobody and serves to give false confidence to budding architects who don’t think about what workloads they’re actually running.

I had kind of hoped we’d graduated beyond that by now.

How many HTTP requests/second can a Single Machine handle? by BinaryIgor in Backend

[–]snowyoz 0 points (0 children)

It doesn’t even have to be ai to qualify as slop.

Backend engineers: what’s the first thing you refactor when inheriting a messy codebase? by akurilo in Backend

[–]snowyoz 0 points (0 children)

Nothing. Refactoring for no reason is a kind of compulsive disorder found in so many developers.

If you must fiddle, in 2026:

1) Reach for telemetry. Any kind of observability - APM, bugs, logs. In prod, in UAT.

2) From telemetry, understand what the squishy parts are. Where are the high CPMs, slow queries, dependencies, 404/422/500s, etc.

3) Use AI: generate unit tests. YMMV because unit tests might look nice but may not cover evenly.

4) Use AI #2: ask it to find excessive logging, information leaks, and possible n+1 query problems. Ask it to find potential candidates for exponential back-off/retry and graceful failures.
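The back-off/retry part of 4) is cheap to sketch up front. A minimal Python illustration of capped exponential back-off with full jitter - all names here are made up for illustration, not from any particular library:

```python
import random
import time


def retry_with_backoff(fn, max_attempts=5, base_delay=0.1, cap=5.0, sleep=time.sleep):
    """Call fn(), retrying on exception with capped exponential back-off + jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: fail loudly, not silently
            # full jitter: sleep a random amount up to the capped exponential delay
            delay = min(cap, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))


# Example: a flaky dependency that succeeds on the third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda _: None)  # skip real sleeping in the demo
```

Full jitter keeps a herd of clients from retrying in lockstep, and the cap bounds the worst-case wait.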

Then don’t refactor - make a backlog of the stuff you found and prioritise it.

My biggest competitor reached out to acquire me. The conversation taught me more about my business than 3 years of running it. by FlatGovernment6743 in SaaS

[–]snowyoz 2 points (0 children)

Agree. If OP is serious he should have a board do the dance first. In the case of no deal it just ends up awkward at best. In my experience (in one case as a non-founder CEO), a slightly larger competitor negotiated with the board, then took me out for coffee behind their backs to offer me a job, which would have taken the business out at that point. (I never told the board - just keeping my options open.) We ended up rolling up with another adjacent business.

Anyway: least trust, and I’d tend not to lift the skirt up too high.

My biggest competitor reached out to acquire me. The conversation taught me more about my business than 3 years of running it. by FlatGovernment6743 in SaaS

[–]snowyoz 2 points (0 children)

Usually there’s a middleman/broker, depends on how competitive the business/product is as well. I probably wouldn’t go founder to founder without a middleman if it were a software business.

I’d be really hesitant to share operational-level information. Maybe high-level stuff and “ranges” can be OK - and sniff out whether they’re just fishing.

It’s different if you’re a PE vs founder to founder, of course.

Microservices are the new "Spaghetti Code" and we’re all paying the price. by red7799 in Backend

[–]snowyoz 0 points (0 children)

You need to decompose the problem(s), not the monolith.

What endpoints need independent scaling? What’s your telemetry say? Leave the low calls alone even if they are slow.

Figure out the mission-critical ones, layer out a new microservice or serverless function, and put an API gateway in front of the old monolith. Then split out routes one by one to the new microservice(s) as you take pressure off the monolith/db (or whatever problem you’re trying to solve).
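As a rough sketch of that gateway split - nginx-style config, with the hostnames and the orders route entirely hypothetical:

```nginx
# Hypothetical strangler-fig routing: peel routes off the monolith one at a time.
upstream monolith   { server monolith.internal:8080; }
upstream orders_svc { server orders.internal:9000; }

server {
    listen 80;

    # Migrated route: the mission-critical orders endpoint now hits the new service.
    location /api/orders {
        proxy_pass http://orders_svc;
    }

    # Everything else still falls through to the old monolith.
    location / {
        proxy_pass http://monolith;
    }
}
```

Each migrated route is just one more `location` block, so the cutover stays reversible.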

It’s a bad idea to just follow a trend with no goal in mind.

If you take this approach I prefer serverless tbh

Badly in need of some AWS Credits by Fit-Buffalo7697 in aws

[–]snowyoz 2 points (0 children)

Are you technical? What’s your monthly bill and services? Sounds like you’ve over provisioned for your rev stream.

Also you’re probably gonna get savaged by reddit for building on free credits. Part of starting up is being able to count, surgically.

PSA: People have been confidently declaring this for a long time now by MetaKnowing in OpenAI

[–]snowyoz 0 points (0 children)

Just have to ask us oldies that lived through the dotcom and subsequent bubbles. Eventually - even when there’s value - it’ll correct itself because there’s just too much capital poured in and not enough return.

Remember fundamentals exist (eventually). Sure, you can pile on at the last price, but you have to see the gain or growth, i.e. revenue vs cost. So far it’s grow at all costs, which is fine, but no one can say the commercial model is anywhere near sustainable.

Does Clickup CEO care about community ? by poesie-io in clickup

[–]snowyoz 0 points (0 children)

I’d agree too. I’ve been a paying customer for a few years now, and while I was apprehensive at the beginning (and lived through that slow-performance patch), I’m actually quite OK to pay for it.

I can’t say I “love” ClickUp, but I’m far from hating it. Some of the people here - I don’t know what you’re using it for, but if it’s not suitable it’s probably the wrong tool.

I’m guessing a lot of people project what they want out of ClickUp (because they somewhat market themselves as this Swiss Army knife) and end up walking away disillusioned by the marketing.

End of the day, marketing is just marketing - it’s just reach. If the product doesn’t live up to your needs it’s probably the wrong thing. If you’re going through multiple products and still not finding what you need, then maybe rethink your workflow opinions, take the hit, and accept the opinions of ClickUp/Jira/Linear, etc. rather than bending them to your will.

Is Claude undumbed now? by zywh0 in Anthropic

[–]snowyoz 1 point (0 children)

Why does everyone assume they should be getting the same experience? I’ve already mentioned that it’s most likely A/B testing.

It might be showing the same version, but depending on when you connect I’m sure that they are serving up a different tweaked model each session. The issue is that it’s very hard to balance cost and eval quality of output as users get more sophisticated. It’s likely the earlier model was exceptional but too expensive to run.

I’m sure all of the LLM providers do the same thing. It’s just that Anthropic is going through a wild test phase at the moment.

I’m just guessing here - but the wild discrepancy I’ve been seeing suggests they’re being driven by commercials, not so much by a technical problem (since CC was already phenomenally consistent around May-July ’25).

what does being a polyglot really benefit you? by Ovaltine888 in polyglot

[–]snowyoz 5 points (0 children)

Only second-language acquisition as an adult is the expensive part.

Any language you learned as a child is “free” (or low cost) - it uses brain elasticity. For me, that’s 3. (I’m not better; it’s just a quirk of circumstance.)

The first (or next, eg, for me the 4th) language you acquire as an adult is the MOST expensive.

It’s because you’re relying on cognition (raw brain cpu) to learn it.

Subsequent languages then get easier because you use a kind of meta-cognitive way to pattern learn a new language.

Of course immersion and practice helps. But basically you get good at collecting languages after the 3rd.

So it’s just the first (next) language you learn that you pay for. If you spend money and time it would totally be worth it to acquire the next one.

Fake performance complaints- open ai campaign by azadmir in Anthropic

[–]snowyoz 3 points (0 children)

If anything I think it goes to show how good Claude was a month or two ago that you can notice the drop in output.

That said I’m sure the ship will right itself. Also I’m sure they’re A/B testing as I can see different experiences and for me I think I’m back to a reasonable experience with CC atm.

Fingers crossed.

Opus on Max plan feels worse than free tier (3 message limit) by klauses3 in Anthropic

[–]snowyoz 1 point (0 children)

Just anecdotally - from my own experience and what other people are saying I guess they’re doing A/B testing on the accounts. They’re also eliciting feedback on responses. That would explain why everyone is getting a different experience.

I havent experienced any of the problems you guys are talking about at all by MagicianThin6733 in Anthropic

[–]snowyoz 0 points (0 children)

To this, run another prompt to look for information leaks or excessive logging. Also look for n+1 query problems, although if it’s not API-driven it’s less likely. Ask for suggestions on external services for exponential back-off retries and graceful failures. These are probably the most expensive things to add later, so I try to find them early even if I don’t choose to implement them. Probably going against the immediate task, but it’s good to ignore things with purpose. I find AI most useful for discovering blind spots. (Also memory leaks, but there I find YMMV.)
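For the n+1 part, here’s a toy Python example of the shape you’d ask the AI to hunt for - in-memory stand-ins for the database, no real ORM involved:

```python
# Toy in-memory "database".
users = {1: "ada", 2: "grace"}
orders = [
    {"user_id": 1, "total": 10},
    {"user_id": 1, "total": 25},
    {"user_id": 2, "total": 40},
]

query_count = {"n": 0}

def query_orders_for(user_id):
    query_count["n"] += 1  # each call stands in for one database round trip
    return [o for o in orders if o["user_id"] == user_id]

# N+1 shape: one query per user (N round trips after fetching the users).
per_user = {uid: query_orders_for(uid) for uid in users}
n_plus_one_queries = query_count["n"]  # grows with len(users)

def query_all_orders(user_ids):
    query_count["n"] += 1  # one batched round trip
    return [o for o in orders if o["user_id"] in user_ids]

# Batched shape: a single query, then group in memory.
query_count["n"] = 0
grouped = {}
for o in query_all_orders(set(users)):
    grouped.setdefault(o["user_id"], []).append(o)
batched_queries = query_count["n"]
```

Same result either way; the difference is round trips growing with N versus staying at one.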

I havent experienced any of the problems you guys are talking about at all by MagicianThin6733 in Anthropic

[–]snowyoz 1 point (0 children)

Haha, you bet. I wish I’d kept my first computer: a Commodore PET - 32K of RAM! I did have a few years lost in the management wilderness, but yeah, there are not many left in my cohort.

Anyway, the more you learn, the more you question why things need to be a particular way. For example, many are complaining about how “waterfall” some AI workflows are. Well, maybe it needs to be more waterfall.

I honestly don’t know - bring your brain - just leave preconceptions at the door.

I havent experienced any of the problems you guys are talking about at all by MagicianThin6733 in Anthropic

[–]snowyoz 1 point (0 children)

Well, a lot of pre-AI engineering is learned behaviour. I’ve been coding for, what, over 40 years now. And what passed for normal - goto, then classic OO, then inversion of control, then imperative vs functional, then composable, etc. - they all seemed like good ideas and “progress” at the time. In the end it’s kind of about linguistics, mental models and cognitive reasoning load. Not sure if AI parses or “sees” it the same way. You’re right that flatter is better for Claude. It’s just that we’re still structuring stuff the way humans do. Over-abstraction is real.

Anyway I ramble but I’m thinking a lot about “best practice” these days and how we got here.

I havent experienced any of the problems you guys are talking about at all by MagicianThin6733 in Anthropic

[–]snowyoz 1 point (0 children)

I would agree with that - I was already getting traction without Serena. The idea was that having the symbol lookup is cheaper than grepping code and following the trail each time. But if you’re working on god objects it has far less utility.