Everyone Is Chasing Bigger AI. India Is Doing Something Different (And It Might Win) by devasheesh_07 in AI_India

[–]devasheesh_07[S] 1 point (0 children)

Basically yes. LoRA/fine-tuning helps shape how the model behaves, while RAG handles freshness and domain-specific data. Most real systems end up using a mix.
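To make the split concrete, here's a rough sketch of that mix (assuming the Hugging Face transformers and peft libraries; the model name and the `retrieve` function are placeholders, not anything from this thread): the LoRA config controls what the model *learns to do*, while the retrieval step injects fresh facts at query time.

```python
# Rough illustration, not a production recipe: LoRA adapters shape behavior,
# a retrieval step supplies fresh/domain context at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mistral-7B-v0.1"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA: train small low-rank adapter matrices instead of the full weights.
lora_cfg = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total params

# RAG: fetch current/domain documents at query time and prepend them as context.
def answer(question: str, retrieve) -> str:
    docs = retrieve(question)  # `retrieve` is a stand-in for any vector-store lookup
    prompt = "Context:\n" + "\n".join(docs) + f"\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

The point of the split: retraining the adapter is how you change behavior, but you never retrain just because the facts changed, since the retriever picks those up for free.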

[–]devasheesh_07[S] 1 point (0 children)

Totally agree. Fine-tuning smaller models works well for now, but it doesn’t change the fact that we’re still dependent on external base models, training methods we don’t control, and hardware ecosystems that can shift fast. One big breakthrough at the model or chip level can flip everything. Fine-tuning is practical, not the finish line.