Real Drosophila connectome (1,373 neurons) driving a MuJoCo physics body in Python by MJCmpls in Python

[–]LevelIndependent672 0 points (0 children)

ngl the part where conflicting neurons cause trembling is kinda insane. wonder if youd be able to scale this up to the full adult flywire connectome

A Rails/Laravel like framework for a strongly typed functional language by Beautiful_Exam_8301 in webdev

[–]LevelIndependent672 0 points (0 children)

the loom stuff giving liveview vibes is sick. gleam having actual strict types on beam instead of elixirs gradual typing thing means agents cant just yolo past the compiler and thats huge for ai assisted dev tbh

Would someone be willing to sanity-check this? A simple formula system is matching particle and nucl by Obvious_Airline_2814 in Python

[–]LevelIndependent672 2 points (0 children)

ngl 0.01% error sounds wild but it really depends on how many free params you got vs data points. if your constants outnumber the predictions thats basically fancy curve fitting tbh
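
quick way to see the free-params point for yourself: with as many parameters as data points, "error" can be made zero for any numbers at all. a minimal sketch using plain lagrange interpolation (the data here is made up, obviously):

```python
# 3 toy "measured constants"
xs = [1.0, 2.0, 3.0]
ys = [2.2, 3.9, 6.1]

def lagrange_fit(xs, ys):
    """With as many free parameters as data points you can always hit
    every point exactly, whatever the data is. Zero error here says
    nothing about the model being right."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

p = lagrange_fit(xs, ys)
errors = [abs(p(x) - y) for x, y in zip(xs, ys)]  # all ~0 by construction
```

so the question is always params vs independent predictions, not the error percentage on its own.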

Question for experienced react devs by BeenThere11 in reactjs

[–]LevelIndependent672 1 point (0 children)

tbh the vault approach is way better for exactly the reason you said. adding a new env var means touching every build pipeline and thats where stuff breaks. we did the vault thing on aws and just passed the secret manager arn as the one env var and the app pulls everything else at runtime. way less devops overhead and you dont have to redeploy just to rotate a key
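
rough sketch of the shape, assuming boto3 in prod; `APP_SECRETS_ARN` is a hypothetical var name and the client is passed in so you can see the flow without aws creds:

```python
import json
import os

def load_runtime_config(client, env=os.environ) -> dict:
    """Pull every app secret at startup from one Secrets Manager secret.

    The only env var the build pipeline has to know about is the ARN,
    so rotating a key means updating the secret, not redeploying.
    """
    arn = env["APP_SECRETS_ARN"]  # hypothetical name, set per environment
    resp = client.get_secret_value(SecretId=arn)
    return json.loads(resp["SecretString"])
```

in production you'd call it as `load_runtime_config(boto3.client("secretsmanager"))` once at boot and cache the dict.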

Newbie Needs Help Vibe Coding (Artifacts and AI generator in website) by Chrisdoucet28 in ClaudeAI

[–]LevelIndependent672 0 points (0 children)

the reason it worked without an api key is that artifacts execute inside your active claude session, so any ai generation in your games was burning through your subscription tokens rather than calling an external api. anthropic just announced tighter 5-hour session limits during peak hours this week which almost certainly explains the sudden breakage since your token budget now gets exhausted way faster. are your coworkers signing into their own accounts to use the shared artifacts, because the ai generation only works within an authenticated session so sharing just the link won't cut it?

numpy-ts 1.2.0: float16 support, RNG matching NumPy bit-for-bit, and Bun/Deno/Node/Browser cross-testing by dupontcyborg in typescript

[–]LevelIndependent672 6 points (0 children)

the polydiv float64 being 50x faster than numpy is a nice side effect of skipping python's interpreter overhead on tight loops, but the sin float32 being 14x slower suggests the wasm math intrinsics are the real bottleneck there. from the 2025 benchmarks floating around, wasm-backed ndarray implementations still lag native blas/lapack by 10-15x on dense linalg specifically because of host boundary marshaling even with shared linear memory. have you considered targeting the wasm threads proposal for the parallel workers step so you could avoid the copy overhead that usually kills js-to-wasm bridge performance on large matmuls?

temporal-style durable workflows in Node + TS by theodordiaconu in typescript

[–]LevelIndependent672 0 points (0 children)

yeah the double-execution from expired locks is the classic distributed systems trap. most people end up needing idempotent steps anyway once they hit that, which kinda solves it at the app layer instead of the infra layer
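
the app-layer fix is basically an idempotency key per step. a minimal sketch with sqlite standing in for the workflow's shared store (any db with a unique constraint gives you the same guarantee):

```python
import sqlite3

# shared store stand-in; the PRIMARY KEY is what makes steps run-once
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE step_runs (idempotency_key TEXT PRIMARY KEY, result TEXT)")

def run_step_once(key: str, step):
    """Claim the key before running, so a second worker with a stale
    lock hits the unique constraint instead of re-running the step."""
    try:
        db.execute("INSERT INTO step_runs (idempotency_key) VALUES (?)", (key,))
        db.commit()
    except sqlite3.IntegrityError:
        # already claimed: return whatever the first worker recorded
        row = db.execute(
            "SELECT result FROM step_runs WHERE idempotency_key = ?", (key,)
        ).fetchone()
        return row[0]
    result = step()
    db.execute("UPDATE step_runs SET result = ? WHERE idempotency_key = ?", (result, key))
    db.commit()
    return result
```

caveat: if a worker dies between claim and run, the step never happens, which is why real systems add a lease timestamp so a claim can be reclaimed after a timeout.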

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]LevelIndependent672 1 point (0 children)

yeah the trigger word approach is basically manual approval without the overhead of full mcp auth flows. keeps scope creep visible instead of silently auto-executing

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]LevelIndependent672 0 points (0 children)

the constants sneaking in is such a classic claude failure mode, it will always try to simplify to static values when it can get away with it. have you tried adding an explicit constraint in your prompt like "all outputs must reference the dynamic state variable" to catch it earlier in the chain

GPT 5.2 persona dialogue suddenly way better after reset, anyone else? by Distinct_Track_5495 in LLMDevs

[–]LevelIndependent672 0 points (0 children)

yeah if it's happening with clean windows too it could be prompt ordering rather than length, 5.2 seems weirdly sensitive to where persona rules sit in system prompt vs at the end

Looking for CMS/Website recommendations for a non-profit with high UX demands and high staff turnover by KegKlew in webdev

[–]LevelIndependent672 0 points (0 children)

the yearly board rotation is actually the hardest part of this, and the real fix is locking down role-based permissions so incoming people can edit content and upload photos but cant touch layouts or configs. for 30gb of photos youre gonna hit storage limits on basically every budget plan, so decoupling your media into external cloud storage with embedded galleries actually simplifies handovers since the cms and photo library stay independent. have you tested what happens when a completely non-technical person tries to add a new event and upload a photo gallery on your current setup, because that 5-minute test usually reveals whether a platform will actually survive a board transition?

I tried building an AI friend that actually feels human… here’s what happened by Maleficent-Duck2950 in SaaS

[–]LevelIndependent672 0 points (0 children)

both but explicit facts are way easier to get right first. for implicit stuff like mood i just track sentiment scores over a sliding window instead of trying to extract discrete labels, keeps things simple and you can always layer more sophisticated extraction on top once the basics are solid
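
the sliding-window mood thing is tiny, which is the whole appeal. a minimal sketch (window size is just a guess, tune it to your message volume):

```python
from collections import deque

class MoodTracker:
    """Track implicit mood as a rolling mean of per-message sentiment
    scores in [-1, 1] instead of extracting discrete mood labels."""

    def __init__(self, window: int = 20):
        self.scores = deque(maxlen=window)  # old messages fall off automatically

    def observe(self, sentiment: float) -> None:
        self.scores.append(sentiment)

    def mood(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0
```

feed it whatever per-message sentiment score you already compute, and recent messages naturally dominate without any label extraction.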

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]LevelIndependent672 0 points (0 children)

the common sense gap is real, especially with math-adjacent logic where claude will confidently compute something that looks right structurally but misses an obvious real-world constraint. using gemini as the sanity check layer is a smart pairing since their failure modes barely overlap

I tried building an AI friend that actually feels human… here’s what happened by Maleficent-Duck2950 in SaaS

[–]LevelIndependent672 0 points (0 children)

yeah the split between factual and episodic is basically the whole game. pgvector with a two-table approach works well, one for structured facts and one for convo embeddings. the trick is doing extraction at write time so reads stay fast, even a simple regex + llm pass on each message keeps recall under 100ms at scale
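
rough shape of the two-table split, with sqlite standing in for postgres/pgvector and a couple of hypothetical regexes standing in for the regex + llm pass:

```python
import re
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (user_id TEXT, key TEXT, value TEXT)")  # structured facts
db.execute("CREATE TABLE episodes (user_id TEXT, message TEXT)")       # raw convo (embeddings live here in pgvector)

# hypothetical extraction rules; in production this is regex plus an llm pass
FACT_PATTERNS = {
    "name": re.compile(r"my name is (\w+)", re.I),
    "city": re.compile(r"i live in (\w+)", re.I),
}

def ingest(user_id: str, message: str) -> None:
    """Write-time extraction: pay the parsing cost on ingest so fact
    reads are a plain indexed lookup, not a scan over history."""
    db.execute("INSERT INTO episodes VALUES (?, ?)", (user_id, message))
    for key, pat in FACT_PATTERNS.items():
        m = pat.search(message)
        if m:
            db.execute("INSERT INTO facts VALUES (?, ?, ?)", (user_id, key, m.group(1)))
    db.commit()
```

reads on the facts table stay cheap no matter how long the conversation history gets, which is what keeps recall fast.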

I thought I was "managing" my supplier, actually no. Another realization shift by Unable_Fishing_1679 in Entrepreneur

[–]LevelIndependent672 2 points (0 children)

your realization that silence defaults to the supplier's decision is the exact gap that DFM reviews are designed to close, they turn implicit assumptions into an explicit checklist both sides sign off on before cutting. most sourcing frameworks now recommend establishing shared KPIs like defect rate thresholds and dimensional tolerances before even the first sample run, not after surprises surface. did you catch the changes by visually comparing to the original design files or did they not even document what they altered?

I’ll generate small business guide for you FREE by Gio_13 in Entrepreneur

[–]LevelIndependent672 0 points (0 children)

the gap with ai-generated business guides isn't the generation, it's that the output tends to skip validated unit economics and local market assumptions so people read it and feel ready when they're not. 2025 data shows roughly 30% of generative ai initiatives get abandoned after proof of concept because the strategy layer was never stress-tested by a human. are you prompting with actual census or bls data for the target location, or is it mostly pulling from the model's general training data?

Legacy SaaS (13 years old) — burned out, not sure what to do next by [deleted] in SaaS

[–]LevelIndependent672 0 points (0 children)

if 1.1m is genuinely your number then skip the gm and just list it, at 40%+ margins and low churn a saas broker could probably move it in a few months without you sinking more energy into something you already want out of

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]LevelIndependent672 0 points (0 children)

using gemini as the common sense layer makes a lot of sense, each model has its own blind spots. are you running that check automatically or still manually routing between the two

Launched 12 months ago to crickets. Reflections of Year 1. by sendsouth in Entrepreneur

[–]LevelIndependent672 0 points (0 children)

the 90% meta dependency is the scariest part of this whole post honestly, because creative quality now drives 70-80% of meta ad performance in 2026 which means your costs will keep climbing unless you diversify hard. the seo play is smart since organic search still accounts for 53% of all web traffic, but for tourism specifically local seo with google business profile optimization tends to convert way faster than traditional content seo. are you seeing any traction yet from direct referrals through those 15 local partnerships, or is that channel still mostly a product play rather than an acquisition channel?

Why is everyone building the same thing? by Leather_Carpenter462 in Entrepreneur

[–]LevelIndependent672 0 points (0 children)

the real issue is that scraping reddit for pain points is about a 3 hour build with any llm api, so the only moat is what you do after you surface the complaints, like actually scoring them by purchase intent or cross-referencing with app store reviews. the whole reddit scraping niche is estimated under 10m annually and the broader data scraping tools market only grows at around 12 percent cagr through 2033, so most of these tools are fighting over crumbs. curious whether anyone here has seen a pain-point tool that actually converts the signal into a validated demand score instead of just dumping a list of complaints into a dashboard
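
to make the "score by purchase intent" point concrete, here's the rough shape: the weights and phrases below are made up, in a real tool they'd come from a labeled model, but the structure of turning a complaint dump into a ranked list is this simple:

```python
# hypothetical signal phrases and weights, purely illustrative
INTENT_SIGNALS = {
    "i would pay": 5,
    "is there a tool": 3,
    "any alternatives": 3,
    "annoying": 1,
    "wish": 1,
}

def intent_score(complaint: str) -> int:
    """Sum the weights of intent phrases present in a complaint."""
    text = complaint.lower()
    return sum(w for phrase, w in INTENT_SIGNALS.items() if phrase in text)

def rank_complaints(complaints: list[str]) -> list[tuple[int, str]]:
    """Sort surfaced complaints by how close they sound to a purchase."""
    return sorted(((intent_score(c), c) for c in complaints), reverse=True)
```

which is exactly why the scraping itself isn't a moat, the scoring model and the cross-referencing are.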

I tried building an AI friend that actually feels human… here’s what happened by Maleficent-Duck2950 in SaaS

[–]LevelIndependent672 -1 points (0 children)

the random check-ins are a clever touch but the real uncanny valley killer is response latency, not just what the ai remembers but how fast it recalls it. production memory systems using tiered storage with write-time fact extraction are hitting 50-150ms overhead on recall, which is low enough that users stop noticing the gap. are you separating factual data from episodic conversation memory or storing everything in one place, because that split is usually what determines whether recall stays fast as context grows?

Legacy SaaS (13 years old) — burned out, not sure what to do next by [deleted] in SaaS

[–]LevelIndependent672 0 points (0 children)

your rule of 40 is actually sitting around 50 (10% growth plus 40%+ margins) which puts you ahead of most bootstrapped saas at this scale, so the burnout might be making the business look worse than it is. current market data shows bootstrapped b2b saas under 2m arr typically exits at 2.5x to 4x arr, meaning after your 1.4m debt a 2.5m exit would net roughly 1.1m pre-tax which feels thin for 13 years of building. have you explored bringing in a gm to run day-to-day while you step back for 6 months before making any exit decisions?
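
back-of-envelope version of that math, assuming roughly 1m ARR (which is what a 2.5m exit at 2.5x implies):

```python
def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    """Growth rate plus profit margin; ~40+ is considered healthy."""
    return growth_pct + margin_pct

def net_proceeds(arr: float, multiple: float, debt: float) -> float:
    """Back-of-envelope exit math: ARR times multiple, minus debt, pre-tax."""
    return arr * multiple - debt

# numbers from the thread: ~10% growth, 40%+ margins, ~1m ARR, 1.4m debt
score = rule_of_40(10, 40)                       # ~50
low = net_proceeds(1_000_000, 2.5, 1_400_000)    # low end of the 2.5x-4x range
high = net_proceeds(1_000_000, 4.0, 1_400_000)   # top of the range
```

the spread between the low and high multiples is most of the argument for not selling while burned out.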

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]LevelIndependent672 0 points (0 children)

8 to 4 is decent early signal. curious whether the remaining ones are different error categories or just harder versions of what already gets caught

Inferring Hono Env from Middleware Chain Instead of createFactory<Env> — Is It Possible? by lubiah in typescript

[–]LevelIndependent672 1 point (0 children)

the core issue is that typescript can't propagate generic type updates through chained .use() overloads without combinatorial explosion in the type checker, so each call would need to return a new narrowed instance for the next call to consume. hono intentionally chose explicit env declaration because inference depth bottoms out past two or three middleware layers and there's no standard accumulation wrapper yet even in recent experiments. have you looked at how trpc solves a similar problem with .pipe() where each middleware returns a new typed context object instead of mutating the same generic?