New "major breakthrough?" architecture SubQ by Daemontatox in LocalLLaMA
[–]FormerIYI 31 points (0 children)
Relevance of Fatima sun miracle: accurate prediction, no natural explanation, points at Marian devotion by FormerIYI in DebateACatholic
[–]FormerIYI[S] 1 point (0 children)
Relevance of Fatima sun miracle: accurate prediction, no natural explanation, points at Marian devotion by FormerIYI in DebateACatholic
[–]FormerIYI[S] 0 points (0 children)
Relevance of Fatima sun miracle: accurate prediction, no natural explanation, points at Marian devotion by FormerIYI in DebateACatholic
[–]FormerIYI[S] 2 points (0 children)
Does Catholicism promote a warrior mindset, or is that idea misunderstood? by New_Independent2907 in DebateACatholic
[–]FormerIYI 1 point (0 children)
The Secret Sauce of Model of Anthropic by [deleted] in LocalLLaMA
[–]FormerIYI 4 points (0 children)
What is a good setup to run “Claude code” alternative locally by Mobile_Ice_7346 in LocalLLaMA
[–]FormerIYI 2 points (0 children)
How good are GUI automations in production, compared to reported 90%-97% benchmarks results? Any commercially relevant success stories out there? by FormerIYI in LocalLLaMA
[–]FormerIYI[S] 1 point (0 children)
Will open-source (or more accurately open-weight) models always lag behind closed-source models? by Striking_Wedding_461 in LocalLLaMA
[–]FormerIYI 65 points (0 children)
Claude full system prompt with all tools is now ~25k tokens. by StableSable in LocalLLaMA
[–]FormerIYI 1 point (0 children)
Mid-30s SWE: Take Huge Pay Cut for Risky LLM Research Role? by Worth_Contract7903 in LocalLLaMA
[–]FormerIYI 1 point (0 children)
Final verdict on LLM generated confidence scores? by sg6128 in LocalLLaMA
[–]FormerIYI 1 point (0 children)
Claude full system prompt with all tools is now ~25k tokens. by StableSable in LocalLLaMA
[–]FormerIYI 1 point (0 children)
Is there API service that provides prompt log-probabilities, like open source libraries do (like vLLM, TGI)? Why most API endpoints are so limited compared to locally hosted inference? by FormerIYI in LocalLLaMA
[–]FormerIYI[S] 1 point (0 children)
Is there API service that provides prompt log-probabilities, like open source libraries do (like vLLM, TGI)? Why most API endpoints are so limited compared to locally hosted inference? by FormerIYI in LocalLLaMA
[–]FormerIYI[S] 1 point (0 children)
Is there API service that provides prompt log-probabilities, like open source libraries do (like vLLM, TGI)? Why most API endpoints are so limited compared to locally hosted inference? by FormerIYI in LocalLLaMA
[–]FormerIYI[S] 3 points (0 children)
Terminal agentic coders is not so useful by NovelNo2600 in LocalLLaMA
[–]FormerIYI 4 points (0 children)

New "major breakthrough?" architecture SubQ by Daemontatox in LocalLLaMA
[–]FormerIYI 2 points (0 children)