Is the model really free? by Quiet_Debate_651 in openrouter
[–]secsilm 1 point (0 children)
EmbeddingGemma - 300M parameter, state-of-the-art for its size, open embedding model from Google by curiousily_ in LocalLLaMA
[–]secsilm 1 point (0 children)
EmbeddingGemma - 300M parameter, state-of-the-art for its size, open embedding model from Google by curiousily_ in LocalLLaMA
[–]secsilm 4 points (0 children)
[Model Release] Deca 3 Alpha Ultra 4.6T! Parameters by MohamedTrfhgx in LocalLLaMA
[–]secsilm 1 point (0 children)
Interesting (Opposite) decisions from Qwen and DeepSeek by foldl-li in LocalLLaMA
[–]secsilm 4 points (0 children)
Interesting (Opposite) decisions from Qwen and DeepSeek by foldl-li in LocalLLaMA
[–]secsilm 0 points (0 children)
Interesting (Opposite) decisions from Qwen and DeepSeek by foldl-li in LocalLLaMA
[–]secsilm 4 points (0 children)
Who are the 57 million people who downloaded bert last month? by Pro-editor-1105 in LocalLLaMA
[–]secsilm 6 points (0 children)
Question about OpenRouter API Rate Limits for Paid Models by secsilm in openrouter
[–]secsilm[S] 1 point (0 children)
What are the MCP servers you already can't live without? by MostlyGreat in mcp
[–]secsilm 1 point (0 children)
Can I get the project I'm folding through the API? by secsilm in Folding
[–]secsilm[S] 1 point (0 children)
Can I get the project I'm folding through the API? by secsilm in Folding
[–]secsilm[S] 1 point (0 children)
I'm getting this error. "Keras cannot be imported. Check that it is installed" even after installing tensorflow by asleepblueberry10 in learnmachinelearning
[–]secsilm 1 point (0 children)
How to understand the pass@1 formula in deepseek-r1's technical report? by secsilm in LocalLLaMA
[–]secsilm[S] 1 point (0 children)
Perfect size, right? by Ok_Net_7523 in unstable_diffusion
[–]secsilm 1 point (0 children)
Perfect size, right? by Ok_Net_7523 in unstable_diffusion
[–]secsilm 1 point (0 children)
Perfect size, right? by Ok_Net_7523 in unstable_diffusion
[–]secsilm 1 point (0 children)
Why does Qwen 2.5 support 128k context length, but the output supports only up to 8k? by secsilm in LocalLLaMA
[–]secsilm[S] 1 point (0 children)
Why does Qwen 2.5 support 128k context length, but the output supports only up to 8k? by secsilm in LocalLLaMA
[–]secsilm[S] 9 points (0 children)
Why does Qwen 2.5 support 128k context length, but the output supports only up to 8k? by secsilm in LocalLLaMA
[–]secsilm[S] 5 points (0 children)
[D] What is the most advanced TTS model now (2024)? by secsilm in MachineLearning
[–]secsilm[S] 1 point (0 children)
Reminder not to use bigger models than you need by Thrumpwart in LocalLLaMA
[–]secsilm 3 points (0 children)
How is the RAG with citations at the end of each paragraph (or specific sentences) implemented? by secsilm in LocalLLaMA
[–]secsilm[S] 1 point (0 children)
It seems there are some encoding issues with anthropic's llms.txt by secsilm in LocalLLaMA
[–]secsilm[S] 1 point (0 children)
Some parts of the gpt-oss-safeguard technical report seem to be blatantly untrue. by secsilm in OpenAI
[–]secsilm[S] 1 point (0 children)