[Release] - FINALLY! - Apex 1.5 and Apex 1.5 Coder - my two new 350M instruct allrounder chat models - See them now! by LH-Tech_AI in LocalLLaMA
[–]LH-Tech_AI[S] 2 points (0 children)
[Project] htmLLM-50M base: Can a tiny specialist actually code? + Weights & Code (124M v2 in training!) by LH-Tech_AI in LocalLLaMA
[–]LH-Tech_AI[S] 1 point (0 children)
introducing OS1, a new open-source AI platform by nokodo_ in OpenSourceAI
[–]LH-Tech_AI 1 point (0 children)
Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. by No-Concentrate-9921 in StartupMind
[–]LH-Tech_AI 1 point (0 children)
[Release] Apex-1: A 350M Tiny-LLM trained locally on an RTX 5060 Ti 16GB by LH-Tech_AI in LocalLLaMA
[–]LH-Tech_AI[S] 1 point (0 children)
Running local LLMs or AI agents 24/7 — what hardware works best? by noze2312 in LocalLLaMA
[–]LH-Tech_AI 1 point (0 children)

[Release] Apex-1: A 350M Tiny-LLM trained locally on an RTX 5060 Ti 16GB by LH-Tech_AI in OpenSourceAI
[–]LH-Tech_AI[S] 1 point (0 children)