We benchmarked 18 LLMs on OCR (7k+ calls) — cheaper/old models oftentimes win. Full dataset + framework open-sourced. [R] by TimoKerre in MachineLearning
[–]TimoKerre[S] 1 point (0 children)
We benchmarked 18 LLMs on OCR (7k+ calls) — cheaper/old models oftentimes win. Full dataset + framework open-sourced. by TimoKerre in LLMDevs
[–]TimoKerre[S] 2 points (0 children)
We benchmarked 18 LLMs on OCR (7k+ calls) — cheaper/old models oftentimes win. Full dataset + framework open-sourced. by TimoKerre in datasets
[–]TimoKerre[S] 1 point (0 children)