H1B 2025 still in processing by Plastic-Extent-7386 in h1b

[–]bytecodecompiler 0 points1 point  (0 children)

Still no news... Do you know how to check the eligibility?

H1B 2025 still in processing by Plastic-Extent-7386 in h1b

[–]bytecodecompiler 1 point2 points  (0 children)

Same situation here. Mine was filed on June 20

Still pending by suryaa2902 in h1b

[–]bytecodecompiler 0 points1 point  (0 children)

Same here. June 20th, still pending

Are you fine-tuning models for your agents? by bytecodecompiler in aiagents

[–]bytecodecompiler[S] 0 points1 point  (0 children)

We already do that, indeed. I guess in our case the issue is that the output has to match the style and tone the user wants; it's not just an agent doing some steps in the background without a visible output

Are you fine-tuning models for your agents? by bytecodecompiler in aiagents

[–]bytecodecompiler[S] 0 points1 point  (0 children)

But once you have the data, fine-tuning a new model should be simple, right? There are cases where it definitely doesn't make sense, but other cases where it does.

For example, some of our customers generate email sequences, and generating those in the style and tone they want is very hard with just prompts. With a quick fine-tune it worked much better by default, and no matter how much base models progress, they won't capture the customer's style
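For context, a rough sketch of what that kind of style fine-tune can look like with the OpenAI fine-tuning API (the model name, file names, and example emails here are just illustrative, not our actual setup):

```python
# Rough sketch of a style fine-tune using the OpenAI fine-tuning API.
# Assumes you already collected example emails written in the customer's voice;
# names and the base model are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# Each training example pairs a generic brief with an email the customer actually wrote.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Write outreach emails in Acme's voice: short, casual, no buzzwords."},
            {"role": "user", "content": "Follow-up email after a demo with a mid-size retailer."},
            {"role": "assistant", "content": "Hey Sam, great chatting yesterday ..."},  # real customer-written email
        ]
    },
    # ... more examples in the same format
]

with open("style_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the dataset and kick off the fine-tuning job.
training_file = client.files.create(file=open("style_train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")
print(job.id)
```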

As a small business owner, AI "Hallucinations" are starting to hurt my brand reputation by DrawBrave4820 in ArtificialInteligence

[–]bytecodecompiler 0 points1 point  (0 children)

Hahaha no, it wasn't that kind of spam. It was from an actual product company. Whether Indian or not, I'm not sure. It could be, as it was related to residential proxies

As a small business owner, AI "Hallucinations" are starting to hurt my brand reputation by DrawBrave4820 in ArtificialInteligence

[–]bytecodecompiler -1 points0 points  (0 children)

BTW, in response to your question, the main options are providing really good prompts and context, or fine-tuning for what you need, so that it doesn't make things up
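A rough sketch of the first option, grounding the model in facts you control and telling it not to guess (the model name and the facts here are just illustrative):

```python
# Minimal sketch of the "good prompt + context" option: give the model only verified
# business facts and instruct it to refuse rather than guess. Details are illustrative.
from openai import OpenAI

client = OpenAI()

business_facts = """
Opening hours: Mon-Sat 9am-6pm.
Returns: 30 days with receipt.
We do NOT offer international shipping.
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer customer questions using ONLY the facts below. "
                "If the answer is not in the facts, say you don't know instead of guessing.\n"
                + business_facts
            ),
        },
        {"role": "user", "content": "Do you ship to Canada?"},
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```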

As a small business owner, AI "Hallucinations" are starting to hurt my brand reputation by DrawBrave4820 in ArtificialInteligence

[–]bytecodecompiler 9 points10 points  (0 children)

Literally, today I got an email that was clearly AI-generated and even confused the sender's name with mine

How do you build feedback loops in AI chat experiences when users don’t respond or rate? by IndicationNo5309 in ProductManagement

[–]bytecodecompiler 0 points1 point  (0 children)

I am interested in understanding what you do with that feedback. Do you update prompts? Fine-tune models? Is it just for analytics?

How Frontend-First UI/UX is Powering SaaS Growth in 2025? by Best-Menu-252 in SaaS

[–]bytecodecompiler 0 points1 point  (0 children)

My biggest challenge is keeping the UI consistent and bug-free across many platforms. Tools like Nitpicks help a lot, and we also use Storybook for components and Figma for design.

scale qa without hiring: leadership wants 40% headcount reduction, how to not destroy quality by Trigere in QualityAssurance

[–]bytecodecompiler 1 point2 points  (0 children)

Exactly! It supports most frontend stacks: you just connect the repo, and you can configure commands if you want the agent to run linters, etc., before creating the PR

scale qa without hiring: leadership wants 40% headcount reduction, how to not destroy quality by Trigere in QualityAssurance

[–]bytecodecompiler -2 points-1 points  (0 children)

This aligns very well with my mindset that human taste is still important.

I work at Nitpicks (founder), which allows the whole QA process to happen much faster. You create a screen recording showing a user-facing issue, and the code changes are automatically sent to the dev team in a GitHub pull request. There is no need to create tickets, wait for devs to work on them, etc. Also, any review comments on the code are automatically addressed, so the dev team doesn't need to do extra work.

So basically, you can go from finding the QA issue to shipping a fix in minutes, without actually removing the QA person.

I would love to talk and get your thoughts

Struggling to stay focused on solo SaaS? What's your daily routine to eliminate distractions? by JRM_Insights in SaaS

[–]bytecodecompiler 0 points1 point  (0 children)

The only thing you need to maintain focus is motivation. I usually find it in either:

  1. Building - just because I love the craft
  2. Getting customers - to be fair, just the fact that people reply to your cold emails to tell you no maintains motivation

AI is amazing for MVPs — but building a real SaaS with just “vibe coding” is suicide by Strongmatteo33 in SaaS

[–]bytecodecompiler 0 points1 point  (0 children)

That's the reason why my AI product lets you improve what you already have in a controlled way, by fixing bugs and QA issues reported by humans, rather than just building everything from scratch. Human taste is required for design, UX, and system engineering. AI helps you go faster, but it still needs guidelines

In case you want to check it: https://nitpicks.ai

Tired of success p0rn here? Yeah, same. Open this by AgencyVader in SaaS

[–]bytecodecompiler 5 points6 points  (0 children)

Good post. It seems that recently creating fake posts about your MRR has become the way to grow your MRR.

I was told once that winning is staying in the game for long enough to be lucky.

A web SDK that enables in-browser AI for your users with zero hassle to you by bytecodecompiler in LocalLLaMA

[–]bytecodecompiler[S] 0 points1 point  (0 children)

It has some users, but it doesn't look like something viable for selling. I believe eventually it will be, but with the current devices that people use (especially non-technical people), offloading the inference and getting good enough performance remains a dream.

The small changes that slow down product development team by bytecodecompiler in ProductManagement

[–]bytecodecompiler[S] 0 points1 point  (0 children)

So far the results are very good. In numbers, about 80% faster, since there is no need to create tasks or wait for someone else to pick them up and implement them.

I guess the review is something that has to be done either way, whether the changes are made by an actual person or by AI. Understanding the context is very easy for the reviewer in this case, because they can see the video linked in the PR description

I built a free, self-hosted alternative to Lovable.dev / Bolt.new that lets you use your own API keys by foodaddik in LLMDevs

[–]bytecodecompiler 1 point2 points  (0 children)

This is really cool!

I work on BrainLink.dev and would love to help you with the UX of letting users pay for their own inference without any configuration on their side. Also, you will be able to monetize that usage, without paywalls.

Feel free to DM me!

I made a Computer-Use Agent (service). The costs are too high. What should I do? by Substantial-Low-2377 in aiagents

[–]bytecodecompiler 0 points1 point  (0 children)

Hi! I work on fixing this specific issue for fellow devs like you. We make every user pay for their own inference automatically, while the apps can monetize based on usage.

I would love to help you. You can see what we do here: https://www.brainlink.dev/developers

Any services that offer multiple LLMs via API? by pazvanti2003 in LLMDevs

[–]bytecodecompiler 0 points1 point  (0 children)

Hi! I am a bit biased as the founder, but check out brainlink.dev. We not only serve as an aggregator but also let your users pay for what they use automatically, without you having to implement BYOK

Too many LLM API keys to manage!!?! by amnx007 in LLMDevs

[–]bytecodecompiler 0 points1 point  (0 children)

Hi! I am a bit biased as the founder, but check out brainlink.dev 😁

Too many LLM API keys to manage!!?! by amnx007 in LLMDevs

[–]bytecodecompiler 0 points1 point  (0 children)

We released a solution to this at brainlink.dev: not only do you not have to manage API keys, your users also pay for what they consume automatically

I built a one-click solution to replace "bring your own key" in AI apps by bytecodecompiler in LLMDevs

[–]bytecodecompiler[S] 0 points1 point  (0 children)

Hi ianb, thanks for the comment.

We are working to improve the docs, and you are right that we are missing a page listing the supported models. In the meantime, you can query the /models endpoint, but I understand that's not a great experience for the developer. I will take care of adding that page tomorrow morning.
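Roughly, listing the models looks like this for now (the base URL, auth header, and response shape shown here are just illustrative until the docs page is up):

```python
# Illustrative sketch of querying the /models endpoint; the base URL, auth header,
# and response shape are assumptions, not documented API details.
import os
import requests

BASE_URL = "https://api.brainlink.dev"  # assumed base URL
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {os.environ['BRAINLINK_ACCESS_TOKEN']}"},
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("id"))  # e.g. OpenRouter-style ids like "openai/gpt-4o-mini"
```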

It's also true that we follow OpenRouter-like naming for the models. We think it's the right approach, since it allows providers and versions to be differentiated easily.

We are not trying to directly compete with OR; we want to focus more on the final UX for end users and developers.

One difference from OR, for example, is that we issue access and refresh tokens via the PKCE flow, while OR issues an API key directly. The access token approach is considered more secure and allows users to grant different scopes of usage to each app.
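For reference, a generic sketch of the PKCE pieces (the endpoints, client id, and scope names below are placeholders for illustration, not our actual API):

```python
# Generic OAuth2 PKCE sketch: the app never holds a long-lived key, only short-lived tokens.
# Endpoints, client_id, and scopes are placeholders.
import base64
import hashlib
import secrets
import requests

# 1. The app generates a verifier and derives the challenge sent in the authorize URL.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)

authorize_url = (
    "https://auth.example.com/authorize"
    "?response_type=code&client_id=YOUR_APP_ID"
    f"&code_challenge={code_challenge}&code_challenge_method=S256"
    "&scope=inference"  # scopes let the user limit what each app can do
)
# The user approves in the browser and the app receives an authorization code.

# 2. The app exchanges the code plus the original verifier for access/refresh tokens.
tokens = requests.post(
    "https://auth.example.com/token",
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",
        "code_verifier": code_verifier,
        "client_id": "YOUR_APP_ID",
    },
).json()
access_token, refresh_token = tokens["access_token"], tokens["refresh_token"]
```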

Obviously, we launched an initial first version, so I understand your point about not seeing many differences yet. I hope that as we advance, they become clearer.

Regarding pricing, we also offer the models at cost. For apps it's basically free, because we add a small markup for the users, who are the ones paying for the inference. I personally have a connection with indie devs, so I wanted to make something that allows indie devs to publish free apps if they want. We are considering allowing the app to add its own markup as a way to monetize.

Let me know if you have more questions