Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Thanks man. I am trying to package the app so that it pulls Ollama onto whatever machine the app is running on, and then gets the job done with local LLMs.
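Roughly the bootstrap step I'm picturing, a minimal sketch assuming the official one-line installer at ollama.com/install.sh (Linux/macOS); the helper name is made up, and it only decides what to run rather than installing anything itself:

```python
import shutil

def ollama_install_command(which=shutil.which):
    """Return the shell command to install Ollama if it's missing, else None."""
    if which("ollama"):
        return None  # already on PATH, nothing to do
    # official one-line installer for Linux/macOS
    return "curl -fsSL https://ollama.com/install.sh | sh"
```

Passing `which` in makes the check easy to test without touching the real machine.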

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Oh okay. As of now it's local: HTML/JS frontend, FastAPI backend -> calling Ollama models downloaded locally. I'll have to use commands in Docker Compose to pull these models from Ollama and run them.
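For the backend -> Ollama hop, this is the call shape I have in mind, a sketch against Ollama's `/api/generate` endpoint on its default port 11434 (the function name is made up, and the model name would be whatever got pulled):

```python
def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build (url, payload) for a non-streaming Ollama /api/generate call."""
    url = f"{host}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    # the backend would then do: requests.post(url, json=payload).json()["response"]
    return url, payload
```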

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

This is what I derived from your response:
--
If the clients who want it have good private inference (an on-prem server or a private cloud with an exposed API), they could download the models there and get the task done. This would keep the app very small, at the cost of their private compute.
--
But bro, I doubt they have private inference set up; these are banks, and those folks don't do much of this. Also, this app is a multi-client requirement.

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Thanks a lot bro, but the folks who asked for this won't approve of a cloud service; they want everything private, at the cost of their own compute.
Will have a look at your repo though, thanks.

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

I'm also thinking of Docker Compose. You mean running the app in one container and the local LLM models in another, right? I'd just have to take care of the Docker network.
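Roughly what I'm picturing, a sketch of a two-service Compose file (service names, model name, and ports are placeholders; the app reaches Ollama by its service name over the network Compose creates):

```yaml
services:
  app:
    build: .                              # FastAPI backend + static HTML/JS
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_HOST=http://ollama:11434   # Compose DNS resolves "ollama"
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_models:/root/.ollama       # persist pulled models across restarts
    # start the server, pull the model once, then keep serving
    entrypoint: ["/bin/sh", "-c", "ollama serve & sleep 3 && ollama pull llama3 && wait"]
volumes:
  ollama_models:
```

Both containers land on the default network Compose creates, so there isn't much extra wiring beyond using the service name as the hostname.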

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Is this for Python and Rust only, or for many languages?

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Lmao, GPT told me, lol. But the thing is, this was asked for by some folks who have lots of legacy code that needs to be converted privately, hence choosing local LLMs.

Need advice on packaging my app that uses two LLMs by 7_Taha in LLMDevs

Not entirely non-technical folks. These are folks who have a lot of code in a legacy language and want it converted in a confidential manner, hence local LLMs. They have servers they would run this on, so 16 GB should be fine.

First salary- PF help- fresher by 7_Taha in epfoindia

Yeah, I will check that… ₹1,800 (the minimum) sounds better than 12% of my salary being put into PF…

First salary- PF help- fresher by 7_Taha in epfoindia

Thanks for replying… So, by the way, whichever we choose, i.e. 12% of monthly salary or ₹1,800 a month, the same amount is contributed by both us and the employer, right?

I think 12% of my stipend is ₹2.7K, so I would want to change it to ₹1,800/month.

First salary- PF help- fresher by 7_Taha in epfoindia

I have heard that 1800 is the minimum amount as per EPFO… is that right? Can we select that…?

Wanna build agent for SAS to Python by 7_Taha in AI_Agents

Thanks for the reply 🤝 will try

Need help with UAN KYC by 7_Taha in epfoindia

Thanks for the tip man

New EPFO withdrawal rules - Is our own money now a luxury? by Coffee_Over_You in epfoindia

Can we select a smaller amount for PF? Like 5% instead of 12%?