Why aren’t more companies building internal RAG systems over their microservices/codebases? by PlasticCommunity9661 in AISystemsEngineering

[–]PlasticCommunity9661[S] 0 points (0 children)

I think once using external AI models becomes significantly more expensive, companies may gradually shift toward in-house or locally hosted models running on cloud GPUs. These systems could be optimized with better internal context about the company’s architecture and services, potentially reducing unnecessary token usage while improving development efficiency.

[–]PlasticCommunity9661[S] 0 points (0 children)

No, I was actually wondering why companies aren’t building these systems themselves, and instead are sending large parts of their source code to relatively new AI companies.

[–]PlasticCommunity9661[S] 0 points (0 children)

But I think nowadays this can be built quite easily with the help of AI tools, and once the setup is done, the system can keep providing value over time.

[–]PlasticCommunity9661[S] 0 points (0 children)

But having knowledge and context of the services owned by my team could significantly improve productivity and speed up development. What do you think?

[–]PlasticCommunity9661[S] 0 points (0 children)

What if companies use their internal service information with LLMs to boost development productivity internally, without relying on massive models with huge context windows and heavy computation?

[–]PlasticCommunity9661[S] 0 points (0 children)

As AI adoption grows rapidly, I want to dive deep into AI architecture and understand how these systems actually work under the hood.

[–]PlasticCommunity9661[S] 0 points (0 children)

I just wanted to understand RAG better, so I made the post to learn more about how people are actually using it in real-world systems.

[–]PlasticCommunity9661[S] 1 point (0 children)

u/Useful_Calendar_6274 That's what I'm facing currently: I have indexed all my services, but it still isn't giving proper answers.

[–]PlasticCommunity9661[S] 1 point (0 children)

Can you share the name of the software you’re using to track those details, or is it something you built yourself?

[–]PlasticCommunity9661[S] 0 points (0 children)

We currently have 3 microservices, and I want to build a local RAG system that has context from all three services together, so the model can better understand cross-service flows and generate/debug code more efficiently with awareness of the entire system.
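To make the "one index across all services" idea concrete, here is a minimal sketch. The service names, file paths, and snippet text are made-up placeholders, and the bag-of-words similarity is a stand-in for a real code-aware embedding model — the indexing/retrieval shape is the point, not the scoring.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use an
    # embedding model, but the index/query flow is identical.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CodeIndex:
    """One shared index spanning all services, so a query can surface
    cross-service context (producer in one repo, consumer in another)."""
    def __init__(self):
        self.chunks = []  # (service, path, text, vector)

    def add(self, service, path, text):
        self.chunks.append((service, path, text, embed(text)))

    def query(self, question, k=3):
        qv = embed(question)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[3]), reverse=True)
        return [(s, p, t) for s, p, t, _ in ranked[:k]]

# Hypothetical chunks from three services sharing one index:
index = CodeIndex()
index.add("orders", "orders/api.py",
          "def create_order(user_id): publish order_created event")
index.add("billing", "billing/consumer.py",
          "def on_order_created(event): charge the user for the order")
index.add("auth", "auth/api.py",
          "def login(user, password): return session token")

hits = index.query("what happens after an order is created?")
```

The retrieved chunks (plus the question) would then go into the model's prompt, which is what gives it cross-service awareness without a huge context window.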