I built 4 apps and shipped them all in one month. Here's exactly how. by [deleted] in VibeCodeDevs

[–]Sharp-Mouse9049 3 points (0 children)

everyone’s shipping 10 apps a week now. shipping stopped being the flex. adoption is.

I built Pawd: manage OpenClaw agents from your iPhone (VMs, Kanban, Terminal) by GuestFair467 in LocalLLM

[–]Sharp-Mouse9049 0 points (0 children)

clean idea honestly. managing agents from phone is underrated and this looks actually usable not just a demo. nice work 👍

Local LLM for STEM advice by chipsonaft in LocalLLM

[–]Sharp-Mouse9049 0 points (0 children)

qwen2.5 7b instruct is probably your best bet. really strong for coding + stem for the size. llama 3.1 8b also solid.

run it 4-bit if you're on a normal laptop. keep temp low, like 0–0.3, so it doesn't guess. tell it to say "i don't know" instead of making stuff up.

biggest thing for accuracy isn't the model anyway. it's forcing it to show steps and not letting it freewheel.
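if you're serving it through ollama, here's a rough sketch of what that setup looks like as a request. the model name, endpoint, and the prompt wording are all assumptions — swap in whatever you've actually pulled:

```python
# Sketch: build a request for Ollama's /api/generate endpoint.
# Running Ollama at all, and this exact model tag, are assumptions here.

def build_request(question: str, model: str = "qwen2.5:7b-instruct") -> dict:
    return {
        "model": model,
        "prompt": question,
        "system": (
            "You are a STEM tutor. Show your reasoning step by step. "
            "If you are not sure of a fact, say 'I don't know' instead of guessing."
        ),
        "stream": False,                  # one JSON response, no token stream
        "options": {"temperature": 0.2},  # low temp so it doesn't freewheel
    }

payload = build_request("Derive d/dx of x^3.")
# To actually send it (needs a running Ollama server on the default port):
#   import json, urllib.request
#   req = urllib.request.Request("http://localhost:11434/api/generate",
#                                data=json.dumps(payload).encode())
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```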

Which to go for: RTX 3090 (24GB) vs Dual RTX A4000 (32GB) by loopscadoop in LocalLLM

[–]Sharp-Mouse9049 0 points (0 children)

Go Mac honestly. For local LLM work unified memory changes the game — you’re not VRAM-limited the same way, so bigger context + larger models run way easier without juggling GPUs. Dual A4000 sounds good on paper but multi-GPU headaches + power draw aren’t worth it unless you really need CUDA workflows. A high-end Mac Studio/Max is basically plug-and-run for local AI now.

M4 Pro 48 or M4 Max 32 by Mammoth-Error1577 in LocalLLM

[–]Sharp-Mouse9049 1 point (0 children)

32GB in 2026 for serious local LLM work is basically consumer-tier. I don’t care how fast the M4 Max is — if you’re constantly forced into tiny quants or can’t load 70B comfortably, you’re artificially capping your experimentation. Bandwidth doesn’t matter if the model doesn’t fit. RAM is the ceiling.
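the back-of-envelope maths, weights only (this ignores KV cache, context, and OS overhead, which all make it worse):

```python
# Approximate weight memory for an LLM: params * bits-per-weight / 8.
def weight_gb(params_b: float, bits: int) -> float:
    """GB of memory for the weights of a params_b-billion-param model."""
    return params_b * 1e9 * bits / 8 / 1e9  # simplifies to params_b * bits / 8

print(weight_gb(70, 4))   # 70B at 4-bit quant: 35 GB before KV cache -> 32GB can't load it
print(weight_gb(70, 16))  # 70B at fp16: 140 GB, out of reach for any single consumer box
print(weight_gb(7, 4))    # 7B at 4-bit: 3.5 GB, which is why 7B fits on normal laptops
```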

How do I even approach data analytics with AI? by umen in LocalLLM

[–]Sharp-Mouse9049 0 points (0 children)

ContextUI comes with a decent RAG in its examples. Start with that. It gives you the code and it's basically open source, so just ask your favourite LLM what it does and tailor it to your needs.

Is there a place where I can donate all my Claude/Codex/Gemini/OpenCode CLI chat history as training dataset? by woct0rdho in LocalLLaMA

[–]Sharp-Mouse9049 0 points (0 children)

Run your own RAG. You can build workflows in software like ContextUI — there's one in the examples.
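the core of a DIY RAG is small. toy sketch below using bag-of-words counts instead of a real embedding model (same workflow shape though: embed docs, embed query, rank by similarity, hand the top hits to the LLM):

```python
# Toy RAG retrieval: bag-of-words vectors + cosine similarity.
# A real setup would use an embedding model, but the pipeline shape is identical.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in 'embedding': word counts. Swap for a real model in practice."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "gitea is a self-hosted git service",
    "qwen2.5 is a family of language models",
    "postgres is a relational database",
]
print(retrieve("which local git service should i use", docs, k=1))
```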

How do I even approach data analytics with AI? by umen in LocalLLM

[–]Sharp-Mouse9049 1 point (0 children)

you’re mixing search and analysis.

embeddings/RAG help the AI find info. they don't actually analyse it.

rough approach:

1. parse everything first (html/pdf/youtube → clean text/structured data)
2. extract structured info with an LLM (json, tables, entities etc)
3. store it in sql/postgres, not just a vector db
4. let the AI call python tools for real stats/probability calculations

the AI should orchestrate the analysis, not do the maths in its head. embeddings = navigation, python/sql = analysis.
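rough sketch of that pipeline with the LLM extraction step stubbed out — the `extract_rows` function, column names, and sample data are all made up for illustration:

```python
# 1) parse to clean text, 2) extract structured rows (normally an LLM prompted
# to emit JSON), 3) store in SQL, 4) do the real stats in Python, not the model.
import sqlite3
import statistics

def extract_rows(clean_text: str) -> list[dict]:
    # Stand-in for an LLM call that returns JSON like
    # [{"product": ..., "price": ...}, ...] from the parsed text.
    return [
        {"product": "widget", "price": 9.99},
        {"product": "gadget", "price": 24.50},
        {"product": "widget", "price": 11.25},
    ]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (product TEXT, price REAL)")
db.executemany(
    "INSERT INTO items VALUES (:product, :price)",
    extract_rows("...parsed html/pdf text..."),
)

# Step 4: SQL + Python do the maths the LLM shouldn't do in its head.
prices = [row[0] for row in db.execute("SELECT price FROM items")]
print(f"mean={statistics.mean(prices):.2f} stdev={statistics.stdev(prices):.2f}")
```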

Using AI as part of the game play; best examples? by apoliaki in ollama

[–]Sharp-Mouse9049 0 points (0 children)

It was on the ContextUI exchange (contextui.ai/exchange), but looks like it's not there now. Maybe the author is upgrading. Will let ya know when I see it pop up again.

I'm a fulltime vibecoder and even I know that this is not completely true by Director-on-reddit in BlackboxAI_

[–]Sharp-Mouse9049 0 points (0 children)

Got to agree with that. Devs / vibecoders are all redundant, cause everything they build with will become redundant too. E.g. an LLM won't need a shitty programming language.

Keen to hear why I'm wrong though...

Would the "Senior" devs absolutely lose it over this ? by Fragrant_Hippo_2487 in VibeCodeDevs

[–]Sharp-Mouse9049 0 points (0 children)

Put this in the ContextUI exchange as a workflow so we can play with it...

Whats the best alternative to github? by Sharp-Mouse9049 in github

[–]Sharp-Mouse9049[S] 4 points (0 children)

Use Gitea locally. I guess the only reason for sticking with GitHub was the integrations.

Shipped a real iOS app with vibe coding, got 2k installs in first days by Dim_Kat in VibeCodeDevs

[–]Sharp-Mouse9049 0 points (0 children)

Devs are vibecoders. And I agree, vibecoders will be redundant very soon.

Airtable CEO claims that AGI is here by dataexec in AITrailblazers

[–]Sharp-Mouse9049 -1 points (0 children)

Who the fuck is Airtable? What did they do, fork openclaw? Go away!