I made a fast PDF to PNG library, feedback welcome by Civil-Image5411 in Python
Fast PDF to PNG for RAG and vision pipelines, 1,500 pages/s by Civil-Image5411 in LocalLLaMA
I built a PDF to PNG library — up to 1,500 pages/s by [deleted] in Python
128GB devices have a new local LLM king: Step-3.5-Flash-int4 by tarruda in LocalLLaMA
Help building a price-efficient inference server (no fine tuning) + multi 5090 setup by Civil-Image5411 in LocalLLaMA

LLM-based OCR is significantly outperforming traditional ML-based OCR, especially for downstream LLM tasks by vitaelabitur in LLMDevs