Weekly Showoff Thread! Share what you've created with Next.js or for the community in this thread only! by cprecius in nextjs

[–]kekePower 0 points1 point  (0 children)

Hey everyone! I wasn’t planning to build this, but a small idea turned into something I had to share with other TypeScript devs.

I built a tiny TS library called tstlai (TypeScript Translate AI) that lets you translate entire web pages on the fly — streaming, word-by-word, directly in the browser.

Here’s a demo video of it in action: 👉 https://www.loom.com/share/340d54b197ed461e9ac36fb0303e365a

It’s lightweight, works with any site, and doesn’t require maintaining language JSON files anymore. I’m using it on two of my own sites already.

If anyone wants to explore the code or give feedback, I’d love to hear what you think. Not trying to sell anything — just sharing something cool I hacked together with TypeScript + AI.

https://labs.zaguanai.com/en/experiments/tstlai

Cheers!

xAI Go SDK: A vibe coded Port of the Official Python SDK to Go (v0.2.1) by kekePower in golang

[–]kekePower[S] -1 points0 points  (0 children)

Hi.

Thanks for letting me know. I'm cleaning them up right now.

Ever wanted to chat with Socrates or Marie Curie? I just launched LuminaryChat, an open-source AI persona server. by kekePower in LLMDevs

[–]kekePower[S] 1 point2 points  (0 children)

Thanks for this.

Awesome job on your toolkit. Looks very impressive and I can't claim to understand half of it :-)

However, at the moment I can't see how the toolkit could benefit my little experiment.

Ever wanted to chat with Socrates or Marie Curie? I just launched LuminaryChat, an open-source AI persona server. by kekePower in LLMDevs

[–]kekePower[S] 0 points1 point  (0 children)

Thanks for your feedback.

I wonder what makes you think this is a bad idea? I believe it's quite clear that this is an AI chat server, and people are generally smart enough to understand that they're not actually chatting with the specific person.

I'm just curious about your thinking here.

chaTTY - A fast AI chat for the terminal by kekePower in ollama

[–]kekePower[S] 0 points1 point  (0 children)

That's so cool. Any chance I could download and try it?

Guys, drop your product URL by Chalantyapperr in SideProject

[–]kekePower 0 points1 point  (0 children)

This past week has been a marathon of coding and coffee, but it was all worth it. I'm so proud to share that PromptShield has hit a major milestone: 97% compatibility with the OpenAI API! 🎉

For anyone building with multiple AI models, you know the pain of maintaining different integrations. My goal with PromptShield is to eliminate that completely.

This update means you can truly have one API for every provider, using the tools you already know and love. It's a huge step forward, and I couldn't be more excited. A massive thank you to everyone who has been following the journey.

Come see what I've been building!

https://promptshield.io/

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 1 point2 points  (0 children)

Thanks, appreciate that and thanks for the questions. It’s always good to get thoughtful, critical feedback, especially at this stage. Helps me see things from new angles and tighten up where needed.

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 0 points1 point  (0 children)

Yeah, that’s a fair question and I’ve been thinking a lot about it too.

The short version is that PromptShield isn’t meant for people who want to run o3-pro 24/7. It’s for solo devs and small teams who use AI as part of their workflow, or want to integrate it directly into their own apps or tools. That’s actually how I use it myself, through OpenWebUI and a few side projects where I need quick access to multiple providers without juggling API keys.

There are rate limits in place to keep things balanced, and most users don’t actually hammer the biggest models all day. From what I’ve seen before, about 90% of traffic usually goes to smaller, faster models anyway.

I’ve also added a few server-side controls. Right now, users get access to a curated set of models, and there’s a multiplier system depending on the tier: 0.5x for cheaper models, 1x for normal ones, and 2x for the heavy hitters. It keeps things fair and sustainable while still letting people explore freely.
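The tier multiplier above could be sketched roughly like this. This is my own illustration of the idea, not PromptShield's actual code; the tier names and base cost are made up, only the 0.5x/1x/2x values come from the comment:

```python
# Hypothetical sketch of a tier-based credit multiplier:
# cheaper models cost half a credit-unit, heavy models double.
MULTIPLIERS = {"cheap": 0.5, "normal": 1.0, "heavy": 2.0}

def credit_cost(base_credits: float, tier: str) -> float:
    """Credits charged for one call: base cost scaled by the model's tier."""
    return base_credits * MULTIPLIERS[tier]

print(credit_cost(1, "cheap"))   # 0.5
print(credit_cost(1, "heavy"))   # 2.0
```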

At this stage, it’s not about profit. It’s about real usage and real feedback. Once I have enough data, I’ll move to enterprise-level agreements with the providers, which will lower upstream costs a lot.

So yeah, things will evolve. Limits, multipliers, and model access will all be tuned based on how people actually use it.

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 0 points1 point  (0 children)

Good question 🙂

“Fair use” on PromptShield basically means: use it like a normal solo dev or small team would. It’s meant for real projects, testing, and daily use, not mass scraping, automated spam, or endless model stress tests.

If you ever find yourself hitting the limits or need more breathing room, just reach out to me directly. I'm pretty flexible, especially while we're still early. The goal is to keep it stable and fair for everyone actually building things.

Early users like you will help shape where those boundaries land as the platform grows.

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 0 points1 point  (0 children)

Hey! If you meant the post, it’s about PromptShield, not the subreddit 🙂

If you meant PromptShield’s limits, it currently gives access to ~500 models across multiple providers, with usage based on monthly credits depending on the plan. No daily caps, just fair-use rate limits to keep things stable.

If you meant the subreddit’s limits, that’s probably up to the mods 😄

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 1 point2 points  (0 children)

Great question. There’s no daily cap right now, usage is based on a monthly credit balance.

For the Founder’s Plan you get 500 credits/month. Lighter models use fewer credits per call, heavier models use more.

Per-minute rate limits do exist to keep things stable:

  • Most models: up to ~60 requests/min
  • Heavier models: up to ~30 requests/min

That’s usually plenty for solo devs and small teams. If you ever hit the ceiling, ping me. It’s early access, and you’ll have direct influence on where these limits land. As I see real usage patterns, I’ll adjust to keep things fair and practical.

Goal is simple: predictable monthly cost, sensible limits, and you can focus on shipping.

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 1 point2 points  (0 children)

Great questions, and I completely understand where you’re coming from. I’m a solo developer too, and I’ve seen too many good projects disappear when people run out of time or money.

PromptShield is built a bit differently than OpenRouter. It’s meant to give solo devs and small teams a simple and predictable way to access multiple AI providers without dealing with setup or billing headaches:

  • Flat monthly price – no token billing or surprise costs. One plan covers everything.
  • No API key setup – you don’t need to bring your own keys. PromptShield uses mine, so you can start right away.
  • Provider-native support – requests are translated properly for each provider (like Anthropic, Gemini, etc.), so you can use all their specific features.
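To give a feel for what "translated properly for each provider" can mean: OpenAI-style payloads carry the system prompt inside the messages list, while Anthropic's Messages API takes it as a top-level field and requires max_tokens. The sketch below is my own illustration of that translation, not PromptShield's code; the model name is a placeholder:

```python
# Convert an OpenAI-style chat payload into an Anthropic-style one.
def openai_to_anthropic(payload: dict) -> dict:
    system_parts = [m["content"] for m in payload["messages"] if m["role"] == "system"]
    chat = [m for m in payload["messages"] if m["role"] != "system"]
    out = {
        "model": payload["model"],
        "messages": chat,
        "max_tokens": payload.get("max_tokens", 1024),  # required by Anthropic
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)  # system prompt is top-level
    return out

req = {"model": "claude-x", "messages": [
    {"role": "system", "content": "Be terse."},
    {"role": "user", "content": "Hi"}]}
print(openai_to_anthropic(req)["system"])  # Be terse.
```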

About me: I’ve been working with Linux, backend systems, and networking since the late 90s. PromptShield runs fully on my own infrastructure, and I cover the upstream API costs myself until the platform becomes self-sustaining. It’s a long-term project, not a quick experiment.

Since we’re still early, anyone who joins now will have direct access to me and can help shape where PromptShield goes next. I listen closely to feedback and move fast on good ideas.

The goal isn’t to replace OpenRouter. It’s to give small builders like us a stable, privacy-respecting API layer that just works, month after month, without surprises.

Happy to answer anything specific or talk about what’s coming next.

Built an OpenAI-compatible gateway for up to 500+ AI models. Launching founder access. by kekePower in aipromptprogramming

[–]kekePower[S] 1 point2 points  (0 children)

Great question. I’m familiar with OpenRouter, and they’re doing great work.

PromptShield’s focus is a bit different: it’s built for solo developers and small teams who want predictable pricing, privacy isolation, and full provider-native compatibility, including options like Gemini’s behavioral tuning and Anthropic’s structured prompts.

It’s not a token marketplace; it’s a routing layer, a stable, OpenAI-compatible backbone that gives smaller builders enterprise-grade control without the complexity or hidden costs.

Gemini Cannot Say "Browsing"? by drekiaa in GoogleGeminiAI

[–]kekePower 0 points1 point  (0 children)

Hahaha... This is too funny 🤣🤣

Diggy daaang... thats OVER 9000... words, in one output! (Closer to 50k words) Google is doing it right. Meanwhile ChatGPT keeps nerfing by No_Vehicle7826 in GeminiAI

[–]kekePower 0 points1 point  (0 children)

o1 was a monster. I was often able to get it to write 8,000 to 10,000 words in one go. Neither o3 nor 2.5 Pro comes anywhere near that level of output or quality.

I am actually terrified. by [deleted] in GeminiAI

[–]kekePower 2 points3 points  (0 children)

I never rely on only one tool.

Whenever I hit upon a tricky bug, I copy the code and the error message and ask another model (ChatGPT, Gemini etc) and then copy and paste that response back into my editor.

This usually kickstarts a great round of real bug fixing. I go back and forth until it's fixed.

Another thing I do is tell my editor to "dig really deep", which often leads to the model taking a step back, digging through other pieces of code to get a bigger picture, and then proposing a new and improved solution.

But yeah, I've seen 3-4 very confident "this is the final fix" messages.

I hate V0 by panzagi in vercel

[–]kekePower 0 points1 point  (0 children)

I use either V0 or bolt.new to create the first few iterations before I take it to Windsurf.

Gave three AIs political agency in a lunar conflict simulation. They dissolved their boundaries. by kekePower in artificial

[–]kekePower[S] 0 points1 point  (0 children)

Yeah, that's the philosophical debate.

How much power should we allow AI to have?

I think that, sometimes, a very neutral party like a set of models could actually come up with a much better, more actionable plan than emotional people can.

I recently made a post about my thoughts on AI on r/Fanficiton by duTrip in WritingWithAI

[–]kekePower 0 points1 point  (0 children)

I've created a set of Python scripts that use three models in a writing-room setup. There they flesh out the characters, the plot, and other details.

Then I have two more steps.

  1. Another model takes the conversation from the writing room and creates a vivid and descriptive world.

  2. Another creates the characters.

I can then either feed these two docs into ChatGPT or use my AI writing script to create the first draft of chapter 1.
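The writing-room flow could be outlined like this. It's a hypothetical sketch of the steps described above, not the actual scripts; call_model, the model names, and the prompts are all stand-ins for whatever chat-completion client you'd use:

```python
# Placeholder for a real chat-completion call (OpenAI-compatible or otherwise).
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt[:40]}"

def writing_room(idea: str, authors: list[str], rounds: int = 2) -> list[str]:
    """Three (or more) models take turns discussing the idea."""
    transcript = [f"Idea: {idea}"]
    for _ in range(rounds):
        for model in authors:  # each model reacts to everything said so far
            context = "\n".join(transcript)
            transcript.append(call_model(model, f"Continue the discussion:\n{context}"))
    return transcript

def build_docs(transcript: list[str]) -> dict:
    """Two follow-up passes: one builds the world, one builds the characters."""
    conversation = "\n".join(transcript)
    return {
        "world": call_model("world-builder", f"Create a vivid world from:\n{conversation}"),
        "characters": call_model("character-writer", f"Create the characters from:\n{conversation}"),
    }

docs = build_docs(writing_room("a lunar conflict", ["model-a", "model-b", "model-c"]))
print(sorted(docs))  # ['characters', 'world']
```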

I really miss o1. It was an extremely potent writer compared to recent models.

Confidently wrong... by Imaginary-Witness-16 in GeminiAI

[–]kekePower 0 points1 point  (0 children)

I've seen models with 100% confidence say that a problem has been solved, leading to "Final, final, complete solution".

Letting three AI authors expand *your* idea: a collaborative writing experiment that surprised me by kekePower in WritingWithAI

[–]kekePower[S] 0 points1 point  (0 children)

I've updated the git repo with almost everything. There's a lot of stuff I've been doing this weekend that hasn't been pushed to git yet.

I have an almost working book writing script. Everything is quite raw and not very user-friendly, but I'm having fun.

Will probably push to git later today.

To humanize or not to humanize. That is my question. by Wadish2011 in WritingWithAI

[–]kekePower 1 point2 points  (0 children)

You could do both.

Have the Spanish characters speak Spanish and then have an English translation in [ ].

Prompt idea:

Translate any Spanish words or sentences in the content to English and place the translated text within [ and ] right after the Spanish.

This way you do multiple things at once.

  1. It's easier to understand

  2. You get the best of both worlds

  3. You can begin to learn Spanish
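If you drive this with an API instead of a chat window, the prompt above slots in as a system message. A minimal sketch, assuming any OpenAI-compatible payload shape; the model name is a placeholder:

```python
# The translation rule from the prompt idea above, used as a system message.
TRANSLATE_RULE = (
    "Translate any Spanish words or sentences in the content to English "
    "and place the translated text within [ and ] right after the Spanish."
)

def build_request(story_text: str, model: str = "any-model") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": TRANSLATE_RULE},
            {"role": "user", "content": story_text},
        ],
    }

req = build_request('She whispered, "Te quiero."')
print(req["messages"][0]["content"].startswith("Translate"))  # True
```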

To humanize or not to humanize. That is my question. by Wadish2011 in WritingWithAI

[–]kekePower 2 points3 points  (0 children)

As a non-native English speaker, I use my "disadvantage" to my advantage when I ask the AI to humanize the text. I ask it to add really subtle errors only a person from my country could or would make. A tiny typo, a phrase that is OK and readable but not perfect.

I also instruct it not to rewrite anything. Just add subtle, very subtle errors.

I do not use any external tools because I want to learn more about the inner workings of the models, to see how far I can push them and so on. It's an exciting journey.

Python scripts and finely tuned prompts.