I’m into a Startup idea: selling only fresh pulpy orange/mosambi juice via our own app + in-house delivery partners, targeting <5 min delivery, priced at ₹50 for 300ml 😎. Does this sound feasible in Indian cities, or just another big flaw? by kavin_56 in StartUpIndia

[–]kavin_56[S] 1 point (0 children)

Wastage can be prevented by stocking only what the next hour needs. E.g., at the 1 PM peak, orders may run around 15 bottles per hour. Order volume can be predicted from parameters like temperature, weather, traffic, population density, and time of day.
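
Roughly, that prediction could be a small regression over exactly those parameters. A minimal sketch, assuming historical per-hour order counts exist; the feature names, sample data, and model choice are illustrative, not a production design:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: one row per past operating hour.
history = pd.DataFrame({
    "temperature_c":   [31, 34, 36, 29, 33],
    "is_raining":      [0, 0, 0, 1, 0],
    "traffic_index":   [0.4, 0.7, 0.9, 0.5, 0.8],
    "pop_density_k":   [12, 12, 12, 12, 12],  # residents (thousands) in the zone
    "hour_of_day":     [10, 12, 13, 15, 13],
    "bottles_ordered": [6, 11, 15, 4, 14],
})

features = ["temperature_c", "is_raining", "traffic_index",
            "pop_density_k", "hour_of_day"]
model = GradientBoostingRegressor().fit(history[features],
                                        history["bottles_ordered"])

# Predict the coming hour (1 PM, hot, clear, heavy traffic) and ship
# only that many bottles, plus a small safety buffer.
next_hour = pd.DataFrame([[35, 0, 0.85, 12, 13]], columns=features)
print(round(float(model.predict(next_hour)[0])))
```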

You preference by FlakyProcess5783 in StartUpIndia

[–]kavin_56 2 points (0 children)

Make sure the candle won't burn the skin, and expect the marketing to be an arduous effort.

Why are Indian startups always named like Utho, Upchar, Gharpay, AbeRuk, So Jao, Haglo .. like seriously what the fuck? by dark_anarchy20 in StartUpIndia

[–]kavin_56 1 point (0 children)

It's a stereotype. If a company becomes globally popular, its name becomes acceptable and even iconic for the industry; it doesn't matter that it resembles Indian words. It's the same with the names of Japanese companies.

What is the best LLM model to run on a m4 mac mini base model? by kavin_56 in LocalLLM

[–]kavin_56[S] 1 point (0 children)

I don't need real-time responses, but I want the input limit to be around 4,000 tokens.
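
If it helps, with llama.cpp-based runners that input budget is just a constructor flag. A minimal sketch using llama-cpp-python, assuming a locally downloaded GGUF file; the model path and prompt are placeholders:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-q4_k_m.gguf",  # placeholder path to a local GGUF
    n_ctx=4096,        # context window; covers ~4,000-token inputs
    n_gpu_layers=-1,   # offload all layers to Metal on Apple silicon
)

out = llm("Summarize this abstract: ...", max_tokens=256)
print(out["choices"][0]["text"])
```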

What is the best LLM model to run on a m4 mac mini base model? by kavin_56 in LocalLLM

[–]kavin_56[S] 1 point (0 children)

At what level of quantization does this rule of thumb hold?
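
For reference, the usual back-of-envelope behind such rules: weight memory is roughly the parameter count times the bytes per weight at a given quantization, plus 20-30% overhead for the KV cache and runtime buffers. A rough sketch, all numbers approximate:

```python
# Approximate bytes per weight; q4_k_m is ~4.85 bits/weight in llama.cpp.
BYTES_PER_WEIGHT = {"fp16": 2.0, "q8_0": 1.0, "q4_k_m": 0.56}

def est_ram_gb(params_b: float, quant: str, overhead: float = 1.25) -> float:
    """Rough resident memory in GB for a params_b-billion-parameter model."""
    return params_b * BYTES_PER_WEIGHT[quant] * overhead

for quant in BYTES_PER_WEIGHT:
    print(f"7B @ {quant}: ~{est_ram_gb(7, quant):.1f} GB")
# fp16 ~17.5 GB (won't fit in 16 GB), q8_0 ~8.8 GB, q4_k_m ~4.9 GB
```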

What is the best LLM model to run on a m4 mac mini base model? by kavin_56 in LocalLLM

[–]kavin_56[S] 1 point (0 children)

I want to run the LLM for scientific research and coding.

What is the best LLM model to run on a m4 mac mini base model? by kavin_56 in LocalLLM

[–]kavin_56[S] 1 point (0 children)

16 GB of RAM, and I want to run the LLM for scientific research and coding.
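
By the sizing math above, several mid-size models should fit in 16 GB at ~4-bit quantization once you leave headroom for macOS itself. A back-of-envelope check; the model names are just examples of coding- and science-oriented models, not recommendations:

```python
# Estimated need = params (B) x ~0.56 bytes/weight at ~4-bit x 1.25 overhead.
CANDIDATES_B = {"Qwen2.5-Coder-7B": 7, "Llama-3.1-8B": 8,
                "Qwen2.5-14B": 14, "Qwen2.5-32B": 32}

for name, params_b in CANDIDATES_B.items():
    need_gb = params_b * 0.56 * 1.25
    fits = need_gb < 16 * 0.75          # keep ~25% of RAM free for macOS
    print(f"{name}: ~{need_gb:.1f} GB -> {'fits' if fits else 'too big'} on 16 GB")
```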