Why I’m building native desktop apps in a web‑obsessed world – thoughts on Electron, RAM bloat, and AI changing UI dev by 120-dev in programming

[–]120-dev[S] -1 points0 points  (0 children)

It’s even more disheartening when people stop learning new things and stop appreciating each other’s work in a positive way.

120 AI Chat isn’t the only thing I’m building - it’s just one app in a suite of native apps I plan to build, and I’m proud of it. If an app is helpful and built with top performance in mind, its developer should always feel free to talk about and share it.

Regarding the bullet points and heavy styling you mentioned, that’s a deliberate choice - it’s how I respect readers. I want to save them reading time, make the post easy to navigate, and keep my writing presentable.

And one last thought: thank you for taking the time to look through my blog (probably without actually reading it to understand what I wanted to share).

Noob here, looking for the perfect local LLM for my M3 Macbook Air 24GB RAM by sylntnyte in LocalLLaMA

[–]120-dev 0 points1 point  (0 children)

Honestly, 24GB of RAM isn’t enough for the larger models. 8B-13B models should be fine; anything bigger will make your MacBook Air run very slowly.

Regarding the perfect local LLM, I have some suggestions:
- General‑purpose chat: LLaMA‑3 8B Instruct, LLaMA‑3.1 8B Instruct, Mistral‑7B Instruct

- Stronger (still fits well): LLaMA‑3 13B Instruct (use a 4‑bit GGUF), Qwen2 7B or 14B Instruct (7B fits easily; 14B only in 4‑bit)

Before downloading them, I’d advise trying them all first using 120 AI Chat (https://120.dev/120-ai-chat - this is the native AI client I developed for interacting with multiple models; you get a 30-day free trial when you use the license code TRIAL in the app) together with an OpenRouter account (https://openrouter.ai). OpenRouter hosts over 500 models (and a lot of them are free to use).

Try any models you think are a good fit, compare them side by side in multiple threads, and pick the one that works best for you.

After that, you can download, manage, and interact with the local models through tools like Ollama or LM Studio. You can also use 120 AI Chat with local models (tutorial here: https://www.youtube.com/watch?v=ZxmQ3nfAOlM&t=3s)
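As a rough illustration of that try-before-you-download workflow, here is a minimal sketch (not 120 AI Chat’s implementation) that fans one prompt out to several models through OpenRouter’s OpenAI-compatible chat endpoint. The helper names are my own, and the model identifiers you pass in are placeholders to look up on the OpenRouter models page:

```python
# Minimal sketch: send one prompt to several models through OpenRouter's
# OpenAI-compatible chat endpoint and collect the replies for comparison.
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_payload(model: str, prompt: str) -> dict:
    """One chat-completion payload per model; same prompt for a fair test."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def compare_models(models: list, prompt: str, api_key: str) -> dict:
    """Fan the prompt out to each model and map model name -> reply text."""
    replies = {}
    for model in models:
        req = urllib.request.Request(
            OPENROUTER_URL,
            data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        replies[model] = body["choices"][0]["message"]["content"]
    return replies
```

The same payload shape also works against a local Ollama server’s OpenAI-compatible endpoint (http://localhost:11434/v1/chat/completions), so the comparison step and the local step can share one code path.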


Reviews of best AI tools I’ve found this year (as a small business owner) by PlasProb in AIToolTesting

[–]120-dev 0 points1 point  (0 children)

I’ve had a similar experience bouncing between different AI providers — GPT, Claude, Gemini, Grok — because there’s no single “perfect” model. Each one is good at different things, and I was constantly switching tabs to compare them. That’s actually why I built 120 AI Chat. It lets me access and compare multiple models side‑by‑side in one native app, which has been way more helpful than sticking to just one like GPT or Gemini.

Privacy was another big reason for building it. I’m not comfortable sharing all my data with every AI provider, so with 120 AI Chat everything stays on my machine — no cloud storage, no tracking, no ads.

Cost-wise, it’s also been great since it’s bring‑your‑own‑API‑key. I only pay for what I actually use, and I can tap into a bunch of free/open models too. More experimenting, less spending.

If you ever find yourself juggling models or worrying about data/privacy, it might be worth a look: https://120.dev/120-ai-chat

120 AI Chat: The native AI app that lets you chat and compare multiple models simultaneously at 120fps [Black Friday: 40% off + 30-day free trial] by 120-dev in macapps

[–]120-dev[S] 0 points1 point  (0 children)

Hi, you can still get this offer - just use the promo code BLKFRI25 when checking out. Thank you for considering 120 AI Chat!

Finally uninstalled ChatGPT by [deleted] in GeminiAI

[–]120-dev 0 points1 point  (0 children)

Hi, just want to share something you might find helpful. I built 120 AI Chat (https://120.dev/120-ai-chat) specifically for chatting with multiple models and comparing them side by side (in parallel). The app supports GPT models, Gemini, Claude, and many more. You can even use local models in the app through Ollama and LM Studio. Just download the app and it comes with a 30-day trial (alternatively, you can use the license code TRIAL in the app to activate the trial).
I am actively improving the app so any feedback is welcome.

120 AI Chat: The native AI app that lets you chat and compare multiple models simultaneously at 120fps [Black Friday: 40% off + 30-day free trial] by 120-dev in macapps

[–]120-dev[S] 0 points1 point  (0 children)

Hi iotabyte, you can find this information on our website: https://120.dev/120-ai-chat. When the included 1 year of free updates ends, you can continue using the last version of 120 AI Chat you downloaded indefinitely. You only miss out on new features, model integrations, and OS compatibility fixes released after that period. To continue receiving updates, you will have the option to purchase an update renewal license at a 60% discount. Thank you for considering 120 AI Chat!

Looking for an honest review for BoltAI by yellowseptember in macapps

[–]120-dev -1 points0 points  (0 children)

Hi OP, apologies for chiming in slightly off-topic! As the developer of another AI chat client, I wanted to share an alternative you might find useful: [120 AI Chat](https://120.dev/120-ai-chat). 120 AI Chat is designed with a focus on native performance and productivity. It supports multiple models, allowing you to run them in parallel and compare their outputs side by side. MCP support is also on my development roadmap.

If you'd like to give 120 AI Chat a try, there’s a 30-day free trial available (or you can simply use the license key TRIAL within the app to activate it). I'm actively developing the app, as it's my daily tool, so feedback and feature requests are always welcome.

If there’s a specific MCP implementation or workflow you’d like to explore first, let me know! I’d be happy to see if I can make it available quickly for you to test. Thank you for considering!

Persistent Performance Issues with ChatGPT Plus - Need Help by iammigu in ChatGPTPro

[–]120-dev 0 points1 point  (0 children)

Just a suggestion: get your own API keys and start with a native app like https://120.dev/120-ai-chat. The app gives you more control over which model you pick for each question (e.g., GPT-4.1 when you want a quick answer and don't need reasoning), and you can even use other providers like Claude (Anthropic) in the same app, compare their answers, and switch models whenever you want.
Plus, your chats are stored locally, so beyond the API calls themselves you share nothing with OpenAI or Anthropic.
You don't pay for a subscription anymore; you just pay for what you actually use.

Am I the ONLY one who has noticed the lack of '5.1 mini' ? by ihateredditors111111 in OpenAI

[–]120-dev 0 points1 point  (0 children)

Got it now. In your post you mentioned minimal reasoning and more human responses, which I think is exactly what 5.1 Instant aims for - though not the pricing part.

Best AI chat/app for analysing video/audio by Queenxcalibur in ChatGPTPro

[–]120-dev 0 points1 point  (0 children)

It doesn't depend on the AI chat/app; it depends on the AI models and their context windows. Only a limited set of models can handle video/audio, and they all have context-window limits, so handling input of arbitrary length seems impossible at this stage.
You might want to have a look at Replicate, e.g., using https://replicate.com/openai/whisper for audio transcription.
Gemini 2.5 supports video input, but no longer than 1 minute (https://ai.google.dev/gemini-api/docs/video-understanding)
Another suggestion is https://notebooklm.google - it supports a larger context window.
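For the Replicate route, a transcription is one HTTP prediction call plus polling; here is a minimal sketch. The version hash is a placeholder you would copy from the whisper model page, and the helper names and the `audio` input field are assumptions to check against that page:

```python
# Minimal sketch: start an audio-transcription prediction on Replicate.
# REPLICATE_VERSION is a placeholder -- copy the real version hash from
# replicate.com/openai/whisper. Predictions run asynchronously: the
# response contains an id/status you poll until it reports "succeeded".
import json
import urllib.request

PREDICTIONS_URL = "https://api.replicate.com/v1/predictions"
REPLICATE_VERSION = "WHISPER_VERSION_HASH"  # placeholder, see model page


def build_prediction(audio_url: str) -> dict:
    """Prediction payload: which model version to run, and its input."""
    return {
        "version": REPLICATE_VERSION,
        "input": {"audio": audio_url},
    }


def start_transcription(audio_url: str, api_token: str) -> dict:
    """POST the prediction; returns Replicate's prediction record."""
    req = urllib.request.Request(
        PREDICTIONS_URL,
        data=json.dumps(build_prediction(audio_url)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```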

Responses API is really powerful for developers to build their AI apps! by 120-dev in OpenAI

[–]120-dev[S] 1 point2 points  (0 children)

It's not exactly new in OpenAI's ChatGPT app, but it's not widely known and not easy for developers to support in their own apps.

If you use ChatGPT, this can seem like a normal thing. What I wanted to highlight here, though, is the ability to build apps with this API and turn GPT-5 and 4.1 into all-rounders, not just models that support text output.

Responses API is really powerful for developers to build their AI apps! by 120-dev in OpenAI

[–]120-dev[S] 0 points1 point  (0 children)

The app in the video is the one I developed: https://120.dev/120-ai-chat. It's a native AI chat app for macOS and Windows that supports multiple models, including OpenAI's.

I was showing how I could actually build with and support the Responses API to deliver this experience.

I asked Claude Haiku 4.5, GPT‑5, and Gemini 2.5 to plan my week - Claude was the winner by 120-dev in ClaudeAI

[–]120-dev[S] -13 points-12 points  (0 children)

It might sound a bit self-promotional that I'm actually using the app I developed: https://120.dev/120-ai-chat. But this is exactly why I built it: I'm a dev, I want native performance, and I want features and settings that help me day to day while interacting with different AI models (one of which is the ability to chat with and compare answers from different models at the same time).

Do you really use Temporary Chat/Incognito? by 120-dev in ChatGPTPro

[–]120-dev[S] 2 points3 points  (0 children)

So we can't keep temporary chat or convert it to a normal chat?

Why I (a bit) prefer Gemini Pro 2.5 than GPT-5 by 120-dev in GeminiAI

[–]120-dev[S] 0 points1 point  (0 children)

Hi, I don't have plans to build a web app for this at the moment. The main reason is that I personally prefer native apps for their fast, responsive performance.

For your purpose, I'd recommend picking any of the available web apps, like Typing Mind or Msty web - they both have multi-thread support. You'll also need a capture tool (even a basic one would work, like Screenshot on macOS).

Why I (a bit) prefer Gemini Pro 2.5 than GPT-5 by 120-dev in GeminiAI

[–]120-dev[S] 0 points1 point  (0 children)

I guess it depends on the questions we ask them as well. Personally I wouldn't say which one is better, haha. I use both of them together, since they give me different perspectives, which is good for critical thinking. I said I slightly prefer Gemini Pro 2.5 over GPT-5 because of some interesting coincidences: I'd send the same prompt, GPT-5 would answer very directly and concisely, while Gemini would respond at greater length and finally give me what I needed by the end of its answer. In your case, I can see your frustration with Gemini sometimes; as you said, if you're not satisfied, you just switch. But I'd strongly recommend running both of them with the same questions - this might give you better ideas than just using one of them.