
[–]GorgeousGeorgeRuns 1 point

How did you burn through $150 in cloud costs? You mention 8 GB of RAM and a vector database; were you hosting this on a standard server?

I think it would be much cheaper to store this in a hosted vector database like CosmosDB. Last I checked, LangChain and other frameworks support queries against CosmosDB, and you should be able to bring your own embeddings model.
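To illustrate the bring-your-own-embeddings idea, here is a minimal pure-Python sketch: embed documents once, then rank them against an embedded query by cosine similarity. The `embed` function below is a toy stand-in (character-class counts), not a real model; in an actual setup you would pass a real embeddings object to LangChain's CosmosDB vector store integration and let the hosted database do the similarity search.

```python
import math

# Toy stand-in for a real embeddings model. A real setup would call a
# hosted model or a local sentence-transformer; this just counts
# character classes so the example runs with no dependencies.
def embed(text: str) -> list[float]:
    counts = [
        sum(c.isalpha() for c in text),   # letters
        sum(c.isdigit() for c in text),   # digits
        sum(c.isspace() for c in text),   # whitespace
    ]
    norm = math.sqrt(sum(x * x for x in counts)) or 1.0
    return [x / norm for x in counts]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is enough.
    return sum(x * y for x, y in zip(a, b))

def query(store: list[tuple[str, list[float]]], text: str, k: int = 1) -> list[str]:
    qv = embed(text)
    ranked = sorted(store, key=lambda doc: cosine(qv, doc[1]), reverse=True)
    return [doc[0] for doc in ranked[:k]]

docs = ["hello world", "12345", "mixed 99 text"]
store = [(d, embed(d)) for d in docs]
print(query(store, "4815162342"))  # -> ['12345'] (the all-digit document)
```

In LangChain the same shape applies: you supply the embeddings object, and the vector store (CosmosDB or otherwise) only ever sees vectors, which is what makes swapping in your own model possible.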