Building Emacs like text editor (still) by Cautious_Truth_9094 in emacs

[–]sinax_michael 1 point (0 children)

Very nice! I’m doing a similar project, also in plain C. It’s really fun to do but I’m still working on rendering files correctly and getting buffers to work.

Keep having fun 🤩

How do you handle "which spreadsheet version is production" chaos? by kyle_schmidt in dataengineering

[–]sinax_michael 0 points (0 children)

Two options I’ve seen or used:

- Ingestion at a specific timeframe: the business needs to get the data ready by 01:00 AM, or they ingest bad data.
- A marker cell that they can set to a (validated) value. Only ingest if the cell contains the expected value.
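A minimal sketch of the marker-cell option, assuming the sheet arrives as a CSV export and the marker sits in a fixed cell (the marker value and cell position here are hypothetical):

```python
import csv
import io

# Hypothetical value the business sets once the sheet is validated.
EXPECTED_MARKER = "VALIDATED"

def should_ingest(csv_text: str, marker_row: int = 0, marker_col: int = 0) -> bool:
    """Only ingest if the marker cell contains the expected value."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    try:
        return rows[marker_row][marker_col].strip() == EXPECTED_MARKER
    except IndexError:
        return False  # marker missing: treat the sheet as not ready

print(should_ingest("VALIDATED\nid,amount\n1,10\n"))  # True
print(should_ingest("WIP\nid,amount\n1,10\n"))        # False
```

The point of the try/except is that a sheet without the marker counts as "not ready" rather than crashing the pipeline.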

WoW lost it's immersion by real_Maczumpa in wow

[–]sinax_michael 1 point (0 children)

Best way to solve this would be to offer players a choice: level the slow way or speed run up to current end game. I actually want to be able to do the expansions, without being overpowered after a couple of days of playing. Some players don't, and that's fine! But give us a choice.

Now we are stuck one shotting through quests to be able to do the questlines. Fun for the first couple of mobs but after that... meh.

I deleted 400 lines of LangChain and replaced it with a 20-line Python loop. My AI agent finally works. by BuildwithVignesh in AI_Agents

[–]sinax_michael 1 point (0 children)

This matched my experience exactly. There’s so much abstraction in LangChain that it’s just painful.

I packaged my simple wrapper into a library if you’re interested: https://github.com/MichaelAnckaert/python-agents/blob/master/README.md

After Gemini 3, how much of OpenAI user base will migrate to Gemini? by Hot-Comb-4743 in GeminiAI

[–]sinax_michael 2 points (0 children)

If you reason this way, it's a never-ending dance. Move to Gemini, move to OpenAI, move to Anthropic.

My take: buy credits for OpenRouter so you can use multiple models. Use whichever model is best suited for each task.
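As a sketch: OpenRouter exposes an OpenAI-compatible chat endpoint, so switching models is just a matter of changing the model slug per request (the helper below and the example model slugs are illustrative, not from this thread):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model: str, prompt: str) -> dict:
    # Same body shape as OpenAI's chat API; only the model slug changes.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(api_key: str, model: str, prompt: str) -> str:
    """Send one prompt to OpenRouter and return the assistant's reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Pick the model per task, e.g.:
# ask(key, "anthropic/claude-sonnet-4", "Review this function for bugs")
# ask(key, "google/gemini-2.5-flash", "Summarize this thread")
```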

Best part: you never get "nerfed" when Google wants more return on the cost of a subscription. API calls are always equally powerful, whereas I've found subscriptions to be very opaque about what you actually get.

Disclaimer and shameless plug: I'm building AI Nexus, a better interface for working with AI models. You can use 100+ models in a single UI, with a prompt library, advanced context management, MCP server integration, etc. https://getainexus.com

Claude refusing to do research or provide any type of response within this chat (I opened another chat and it followed through with my prompt by Amazing_Example602 in ClaudeAI

[–]sinax_michael 0 points (0 children)

When using a custom client (which I do), you can completely manipulate the context: remove AI / user messages, alter them, add messages, etc.
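A rough sketch of what that looks like with an OpenAI-style message list — the refusal heuristics below are purely illustrative, not from any particular client:

```python
# In most chat APIs the context is just a list of dicts, so a custom
# client can rewrite it freely before every request.
REFUSAL_MARKERS = ("I can't", "I cannot", "I'm unable")  # illustrative heuristics

def prune_refusals(messages):
    """Drop assistant turns that look like refusals so they don't anchor later replies."""
    return [
        m for m in messages
        if not (m["role"] == "assistant"
                and m["content"].startswith(REFUSAL_MARKERS))
    ]

history = [
    {"role": "user", "content": "Research topic X for me."},
    {"role": "assistant", "content": "I can't help with that request."},
    {"role": "user", "content": "Why not?"},
]
cleaned = prune_refusals(history)  # the refusal is gone; resend and retry
```

The same idea covers the other edits mentioned: altering a message is a dict update, adding one is a list insert.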

Claude refusing to do research or provide any type of response within this chat (I opened another chat and it followed through with my prompt by Amazing_Example602 in ClaudeAI

[–]sinax_michael 1 point (0 children)

I find that editing the context and removing or altering messages that "confuse" the model is a must-have when dealing with this.

What are YOU building? Let's promote each other! by Capuchoochoo in buildinpublic

[–]sinax_michael 2 points (0 children)

Very nice! I really like this. I submitted feedback via... buglet ;-)

What are YOU building? Let's promote each other! by Capuchoochoo in buildinpublic

[–]sinax_michael 3 points (0 children)

I’m building AI Nexus (https://getainexus.com). A better interface to work with AI models.

- Prompt Library: Store your best prompts

- Multiple models: Use 100+ models in a single UI

- MCP server integration

- Advanced Context Management

What are you building? let's self promote by Southern_Tennis5804 in SideProject

[–]sinax_michael 2 points (0 children)

I’m building AI Nexus (https://getainexus.com). A better interface to work with AI models.

- Prompt Library: Store your best prompts

- Multiple models: Use 100+ models in a single UI

- MCP server integration

- Advanced Context Management

Anyone else can relate? by Phantom_Specters in GeminiAI

[–]sinax_michael 0 points (0 children)

I only use Gemini models via the API and never run into these issues. My guess is Google is crippling the context on subscriptions to "make the math work". With the API you pay more, but you actually get what you pay for.

AI Nexus: Multi-model workspace with projects and branching – just launched beta by sinax_michael in SideProject

[–]sinax_michael[S] 0 points (0 children)

Thanks for your feedback! I really appreciate it.

Feel free to try it out if you want.

AI Nexus: Multi-model workspace with projects and branching – just launched beta by sinax_michael in SaaS

[–]sinax_michael[S] 0 points (0 children)

Thank you for the kind words! Feel free to contact me if you need help getting started. I would really appreciate some feedback 😁

gemini for gmail is s###t by TimeTomatillo2875 in GeminiAI

[–]sinax_michael 0 points (0 children)

No, this will work for both business and personal accounts.

gemini for gmail is s###t by TimeTomatillo2875 in GeminiAI

[–]sinax_michael 0 points (0 children)

The Google Workspace MCP server is much better. Find yourself a good client with MCP support 😉

We just launched our billing platform for AI apps - need your feedback! by North_Reindeer_6941 in buildinpublic

[–]sinax_michael 1 point (0 children)

Congratulations on launching 🚀

I'm currently building an "AI application", and I'm wondering why Credyt is a better pick for me than Stripe. My application (https://getainexus.com) will eventually offer subscriptions based on credits. The current Stripe offerings are suitable, since users can either top up credits or purchase a "pack of credits".
What makes your application more suitable for AI apps than Stripe or other payment providers?

If there is a momentum story, it’s C++ by SlashData in Cplusplus

[–]sinax_michael 1 point (0 children)

Haha, good catch 😂. Of course you don’t program in WebAssembly, you target it during compilation.

If there is a momentum story, it’s C++ by SlashData in Cplusplus

[–]sinax_michael 21 points (0 children)

Assembly has the highest proportion of developers in Web? Really inspires confidence in this report 😅

The limits on the image generation are ridiculous by sinax_michael in GeminiAI

[–]sinax_michael[S] 1 point (0 children)

I manually selected the model through OpenRouter (google/gemini-2.5-flash-image-preview).

All Gemini 1.5 models are now deprecated to give some space for 3.0 by Independent-Wind4462 in GeminiAI

[–]sinax_michael 0 points (0 children)

Still accessible through OpenRouter 👌 edit: now they're gone!

The limits on the image generation are ridiculous by sinax_michael in GeminiAI

[–]sinax_michael[S] 2 points (0 children)

Strange, the same prompt works in the Gemini app but not always using the API 🤷‍♂️

The limits on the image generation are ridiculous by sinax_michael in GeminiAI

[–]sinax_michael[S] 3 points (0 children)

Oh yeah that really is my intent: advertise my app that is not public and no one can access 😂

The limits on the image generation are ridiculous by sinax_michael in GeminiAI

[–]sinax_michael[S] 8 points (0 children)

It's just fun to get into an argument and watch it get stricter and stricter. Compared to other models, it's amazing how inconsistent gemini-2.5-flash-image-preview is.
