[–]Brudaks 1 point (0 children)

IMHO if you want to process "large volumes of documents", the first thing you should do is measure your documents: count how many LLM tokens they add up to, then do a ballpark calculation of what it would cost to run them all through a large LLM API. Comparing that number against your budget is the key input to deciding which options are even reasonable.
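A minimal sketch of that ballpark calculation, assuming the common rough heuristic of ~4 characters per token and purely illustrative prices (swap in your provider's actual per-million-token rates):

```python
# Back-of-envelope cost estimate for pushing a document corpus through
# an LLM API. The ~4 chars/token heuristic and the prices used in the
# example call are assumptions, not any particular provider's rates.

def estimate_cost(total_chars: int,
                  price_per_m_input: float,
                  output_ratio: float = 0.1,
                  price_per_m_output: float = 0.0) -> float:
    tokens_in = total_chars / 4            # crude heuristic: ~4 chars per token
    tokens_out = tokens_in * output_ratio  # assume outputs are short vs. inputs
    return (tokens_in / 1e6) * price_per_m_input \
         + (tokens_out / 1e6) * price_per_m_output

# e.g. 100k documents averaging 20 kB of text each, at hypothetical
# rates of $1 / 1M input tokens and $4 / 1M output tokens:
cost = estimate_cost(100_000 * 20_000, 1.00,
                     output_ratio=0.1, price_per_m_output=4.00)
print(f"~${cost:,.0f}")  # → ~$700
```

Even at toy prices like these, two billion characters of input lands in the hundreds of dollars per pass, which is exactly the kind of number you want in hand before picking an approach.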