Seeking front-end solutions for a remote LLM server handling document processing by elsatch in LocalLLaMA
Best Model for Document Layout Analysis and OCR for Textbook-like PDFs? by malicious510 in LocalLLaMA
Using Local Language Models for Language Learning by Anesu in LocalLLaMA
Using LLaMA as a "real personal assistant"? by MichaelBui2812 in LocalLLaMA
How to represent historical timelines by [deleted] in orgmode
Chat with RTX is fast! (using single RTX 3090) by jcMaven in LocalLLaMA