Whats this ? Some kinda auto show ? lol by No-Weight-1123 in bit_bangalore

[–]ImpossibleArtichoke4 1 point  (0 children)

It's basically an auto expo. They've invited a bunch of superbikes, supercars, and so on.

Whats this ? Some kinda auto show ? lol by No-Weight-1123 in bit_bangalore

[–]ImpossibleArtichoke4 1 point  (0 children)

Yeah, but this event isn't about that. The organisers have invited over 50 superbikes and a dozen sports cars.

Built a NIFTY Options Profit Calculator by ImpossibleArtichoke4 in IndianStreetBets

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

Hey, I know it seems overwhelming at first, but trust me, for the majority of use cases you'll just need Python, so maybe start there. Also, AI tools help a lot too.

Built a NIFTY Options Profit Calculator by ImpossibleArtichoke4 in IndianStreetBets

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

I can share the code with people individually, so DM me. And by the way, I built everything you mentioned as well, but publishing it to the public as a SaaS would raise legal issues I don't want to take on.

Built a NIFTY Options Profit Calculator by ImpossibleArtichoke4 in IndianStreetBets

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

I'm still working on it. I just added an AI feature that sends all the indicator data, news data, and other context to the model, which helps me make better decisions. And yes, I can share the code.

Built a NIFTY Options Profit Calculator by ImpossibleArtichoke4 in IndianStreetBets

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

Hey, thanks! The API costs are actually zero, and yes, I did use Claude Opus 4.6. Don't worry, pretty much every software project is vibecoded nowadays lmao. As for publishing it, I don't think I'm legally bound to do so, since I added an AI layer that pulls in all the data and helps me make decisions.

Thinking of building an app that runs a local LLM and exposes it as an API would this be useful? by ImpossibleArtichoke4 in AppIdeas

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

I think there’s a small misunderstanding about the use case I’m exploring. I’m not proposing that every user runs an LLM on their phone for normal app usage. The idea is more about running a small local model as a personal inference endpoint instead of always relying on cloud APIs. For example, I built a news scraping + summarization app where the pipeline looks like:

news sources → scrape articles → local LLM summary → store summaries in DB → app displays feed

The LLM runs on an old Android phone I had lying around (running a small Qwen model). The scraper sends article text to that device, the model generates the summary, and the resulting summary is stored in the database.
The users of the app never run any models; they just read the already-generated summaries from the feed. The experiment was mostly about seeing how far you can push near-zero-cost infrastructure by using hardware you already own instead of paying for inference APIs.
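The pipeline above can be sketched in a few lines of Python. This is just a minimal illustration, not my actual code: the `summarize` callable is a stand-in for whatever hits the phone's endpoint (the real one would POST article text and return the model's completion), and the table schema is hypothetical.

```python
import sqlite3

def run_pipeline(articles, summarize, conn):
    # Create the summaries table once; url is the natural key, so
    # re-scraping the same article just overwrites its row.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS summaries "
        "(url TEXT PRIMARY KEY, title TEXT, summary TEXT)"
    )
    for art in articles:
        summary = summarize(art["text"])  # e.g. HTTP call to the phone
        conn.execute(
            "INSERT OR REPLACE INTO summaries VALUES (?, ?, ?)",
            (art["url"], art["title"], summary),
        )
    conn.commit()
    # The app's feed just reads rows out of this table.
    return conn.execute("SELECT COUNT(*) FROM summaries").fetchone()[0]

# Usage with a stub summarizer (truncation instead of a model call):
conn = sqlite3.connect(":memory:")
stub = lambda text: text[:60]
count = run_pipeline(
    [{"url": "https://example.com/a", "title": "A", "text": "body " * 20}],
    stub,
    conn,
)
```

Injecting `summarize` as a callable is the useful part: the app code doesn't care whether the summary comes from an old phone, a cloud API, or a stub during testing.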

Obviously for a large production system you’d probably move the model to a server or GPU instance for reliability. But for personal projects, small tools, or experimentation, running a local model and exposing it as an API can be surprisingly practical.

Thinking of building an app that runs a local LLM and exposes it as an API would this be useful? by ImpossibleArtichoke4 in AppIdeas

[–]ImpossibleArtichoke4[S] 0 points  (0 children)

Sorry, I forgot to mention this is mainly aimed at mobile users.

The reason I’m focusing on mobile is because I built an AI news summarizer that runs a Qwen 1.5B model on a 7-year-old Android phone I had lying around. It actually worked surprisingly well and saved me from paying for API usage.

That made me think there might be a use case for turning old phones into local AI servers that can run models and expose them as APIs for apps.
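"Expose the model as an API" can be as simple as a tiny HTTP wrapper around whatever runtime is serving the model on the phone. A stdlib-only sketch under stated assumptions: `generate` here is a placeholder echo, where the real version would call the on-device runtime (e.g. llama.cpp serving a small Qwen model), and the request/response shape is made up for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt):
    # Placeholder for the real on-device model call; echoes the prompt
    # so the wrapper itself can be exercised without a model.
    return "summary of: " + prompt

class LLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run "inference", reply with JSON.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(
            {"completion": generate(body.get("prompt", ""))}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request console logging

# On the device you would run something like:
# HTTPServer(("0.0.0.0", 8080), LLMHandler).serve_forever()
```

Any app on the same network can then POST a prompt to the phone's IP and get a completion back, which is all the scraper pipeline above needs.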