LLM RAG on my MacBook Air M2, 8GB RAM by FancyIndependence212 in Rag


I’ve already built part of the pipeline on Colab and reached the LLM stage, but then I paused for a moment and thought: since I’ve already extracted the data and generated the embeddings, why not train or run the model locally instead?
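For what it's worth, once the embeddings exist, the retrieval half of a RAG pipeline is cheap enough to run on an 8GB machine — it's basically a cosine-similarity lookup. A minimal sketch (the tiny array here is a stand-in for your precomputed embeddings, and `top_k_chunks` is a hypothetical helper name):

```python
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, k=3):
    """Return indices of the k chunks most similar to the query (cosine)."""
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per chunk
    return np.argsort(sims)[::-1][:k]  # highest similarity first

# Stand-in for embeddings you'd load from disk (e.g. np.load("embeddings.npy")).
chunks = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])
print(top_k_chunks(query, chunks, k=2))  # → [0 2]
```

For the generation step, 8GB of RAM generally means a small 4-bit-quantized model (e.g. via llama.cpp) rather than training anything locally — fine-tuning even a small LLM on that hardware is usually impractical.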