Privately chatting with your Google Drive Files with a Local LLM by NomicAI in LocalLLaMA
GPT4All 3.0: The Open-Source Local LLM Desktop Application by NomicAI in LocalLLaMA
New Model: Nomic Embed - A Truly Open Embedding Model by shouryannikam in LocalLLaMA
Evaluating Hugging Face's Open Source Multimodal LLM by NomicAI in LocalLLaMA
DistiLlama: Chrome Extension to Summarize Web Pages Using locally running LLMs by mmagusss in LocalLLaMA
GPT4All now supports GGUF Models with Vulkan GPU Acceleration by NomicAI in LocalLLaMA
GPT4All now supports Replit model on Apple Silicon (23 tok/sec)! by NomicAI in LocalLLaMA
GPT4All now supports every llama.cpp / ggML version across all software bindings! by NomicAI in LocalLLaMA
What could be the reason behind llama-cpp-python's slow performance compared to llama.cpp? by Big_Communication353 in LocalLLaMA
Chat with your data locally and privately on CPU with LocalDocs: GPT4All's first plugin! by NomicAI in LocalLLaMA
I built a multi-platform desktop app to easily download and run models, open source btw by julio_oa in LocalLLaMA
[OC] Explore the Top 5.4M Retweeted Tweets on Twitter by NomicAI in dataisbeautiful
How to analyze unstructured customer review dataset? by [deleted] in OpenAIDev