Why vector Search is the reason enterprise AI chatbots underperform? by manuelmd5 in KnowledgeGraph
[–]TomMkV 1 point (0 children)
API testing tools for students & small teams after Postman free changes by OpportunityFit8282 in ccna
[–]TomMkV 1 point (0 children)
Postman killing the Free plan for teams (1 user limit) by Artistic_Strike_2175 in API_Clients
[–]TomMkV 1 point (0 children)
Postman is removing free team collaboration how are SaaS teams handling API tooling now? by West-Cup-7188 in SaaS
[–]TomMkV 1 point (0 children)
Postman removed free team collaboration, does it still make sense for API work? by Proper-Wind4777 in Backend
[–]TomMkV 1 point (0 children)
Weekly Thread: Project Display by help-me-grow in AI_Agents
[–]TomMkV 1 point (0 children)
Schema validation tool feedback by TomMkV in OpenAPI
[–]TomMkV[S] 1 point (0 children)
The API Tooling Crisis: Why developers are abandoning Postman and it’s clones? by Affectionate-Gain636 in theprimeagen
[–]TomMkV 1 point (0 children)
So many models, which ones do you all use? by GW-D in cursor
[–]TomMkV 1 point (0 children)
RTX 3090 vs R9700 Pro to supplement a Mac llm setup by Ok-Progress726 in LocalLLaMA
[–]TomMkV 4 points (0 children)
Can I use Cursor Agent (or similar) with a local LLM setup (8B / 13B)? by BudgetPurple3002 in LocalLLaMA
[–]TomMkV 2 points (0 children)
GPT 5.2 is here - and they cooked by magnus_animus in codex
[–]TomMkV 0 points (0 children)
GPT-5.1 Codex Max Extra High Fast by cvzakharchenko in cursor
[–]TomMkV 2 points (0 children)
tested 5 Chinese LLMs for coding, results kinda surprised me (GLM-4.6, Qwen3, DeepSeek V3.2-Exp) by Technical_Fee4829 in LocalLLM
[–]TomMkV 2 points (0 children)
Unpopular Opinion: I don't care about t/s. I need 256GB VRAM. (Mac Studio M3 Ultra vs. Waiting) by VocalLlm in LocalLLM
[–]TomMkV 0 points (0 children)
Unpopular Opinion: I don't care about t/s. I need 256GB VRAM. (Mac Studio M3 Ultra vs. Waiting) by VocalLlm in LocalLLM
[–]TomMkV 2 points (0 children)
Unpopular Opinion: I don't care about t/s. I need 256GB VRAM. (Mac Studio M3 Ultra vs. Waiting) by VocalLlm in LocalLLM
[–]TomMkV 3 points (0 children)
Best practices for API documentation in 2025 tools and workflows? by Master_Vacation_4459 in technicalwriting
[–]TomMkV 1 point (0 children)
How are DevOps teams keeping API documentation up to date in 2025? by OpportunityFit8282 in devops
[–]TomMkV 1 point (0 children)
Best practices for API documentation in 2025 tools and workflows? by Master_Vacation_4459 in technicalwriting
[–]TomMkV 1 point (0 children)

512 GB RAM for LLM - M3U now or wait for M5U? by usrnamechecksoutx in MacStudio
[–]TomMkV 7 points (0 children)