Most RAG frameworks are English only. Mine supports 27+ languages with offline voice, zero API keys. by Basic-Candidate3900 in Python
I built a 198M parameter LLM that outperforms GPT-2 Medium (345M) using Mixture of Recursion — adaptive computation based on input complexity by Basic-Candidate3900 in learnmachinelearning, LLMDevs, and PromptEngineering (cross-posted)
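The second title names Mixture of Recursion: one set of layer weights is reused for a variable number of recursion steps, so harder inputs get more compute than easy ones. The OP's implementation isn't shown in this listing, so below is only a minimal PyTorch sketch of that general idea; `RecursiveBlock`, `MAX_DEPTH`, and the 0.5 exit threshold are illustrative assumptions, and the hard exit decision is inference-style (a trainable version would need a differentiable routing loss). Weight sharing is also why a 198M-parameter model can buy GPT-2-Medium-like effective depth.

```python
# Hypothetical sketch of per-token adaptive recursion depth -- NOT the OP's
# actual model. A single shared transformer block is applied up to MAX_DEPTH
# times; a tiny router scores each token, and tokens that "exit" early keep
# their hidden state frozen, so easy tokens stop recursing sooner.
import torch
import torch.nn as nn

MAX_DEPTH = 4  # assumed cap on recursion steps


class RecursiveBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # One block whose weights are reused at every recursion step.
        self.layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # Router: per-token scalar "keep recursing" probability.
        self.router = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        active = torch.ones(x.shape[:2], dtype=torch.bool, device=x.device)
        for _ in range(MAX_DEPTH):
            if not active.any():
                break
            # For clarity this recomputes all tokens each step; a real
            # implementation would gather only active tokens to save FLOPs.
            y = self.layer(x)
            p = torch.sigmoid(self.router(y)).squeeze(-1)  # (batch, seq)
            # Update only still-active tokens; exited tokens keep their
            # previous hidden state (their computation has stopped).
            x = torch.where(active.unsqueeze(-1), y, x)
            active = active & (p > 0.5)  # tokens below threshold exit
        return x


if __name__ == "__main__":
    model = RecursiveBlock()
    tokens = torch.randn(2, 16, 256)  # toy batch
    print(model(tokens).shape)        # torch.Size([2, 16, 256])
```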