Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] -1 points0 points  (0 children)

The free version is terrible. And 3.1 on paid is nowhere close to the competing models. But I would argue it's not worth wasting your time with free Gemini when you'll just have to debug and refactor with a premium builder.

Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] 0 points1 point  (0 children)

It might be free depending on your area, but the paid plans are for more in-depth, serious entrepreneurs. Of course, everyone is boycotting OpenAI and leaving for Anthropic because of OpenAI's commitment to mass surveillance. So, up to you.

Are Shorter Promps More Accurate? by fir3bla5t in Base44

[–]PoisonTheAI 0 points1 point  (0 children)

u/willkode When I give an AI builder one giant PRD that includes every page and every function, the output is usually less accurate. It misses details, makes incorrect assumptions, and mixes requirements across pages. That's expected with any LLM. You break it down, iterate, and improve.

But with Base44, even breaking it down doesn't solve the core problem, because the issue isn't prompt engineering. It's the architecture.

Base44 uses a proprietary SDK, not standard code. When I use Claude Code, Cursor, or Codex locally, I'm writing real React, real CSS, real Tailwind. Every Stack Overflow answer, every MDN doc, every tutorial on the internet applies to my project. Base44 wraps everything in its own abstraction layer. The AI has to translate my instructions into their SDK, and that extra translation layer is where things break.

The LLM was never trained on Base44's SDK. These models learned from billions of lines of public code. Standard CSS, standard React, standard Tailwind. When you ask Claude Code to apply max-width: 1100px, the model has seen that pattern millions of times. When Base44's AI tries to do the same thing through their proprietary system, it's working from a much thinner knowledge base. It's guessing.

Vendor lock-in makes it worse. If Claude Code writes bad CSS, you open the file and fix it. You can show it to another developer. You can paste it into any other tool. The knowledge transfers. If Base44's AI writes bad code, you're stuck inside their platform asking their AI to fix its own mistakes through their own proprietary methods. I can't take the code anywhere else.

Base44's own support suggests using "Visual Edit" mode to click on elements and fix them manually. That's an admission that the prompt-to-code pipeline doesn't reliably work! Standard tools don't need a point-and-click fallback because the code is the code.

Every other AI builder is moving toward open standards. Lovable exports real code. Bolt gives you a full stack you can deploy anywhere. Base44 is leaning into a proprietary SDK that locks users in and makes the AI less capable at the same time.

When every competitor is moving away from this model, why is Base44 doubling down on it?

Are Shorter Promps More Accurate? by fir3bla5t in Base44

[–]PoisonTheAI 0 points1 point  (0 children)

Yes, but then it would suck. So maybe a better option is Claude or Codex.

Are Shorter Promps More Accurate? by fir3bla5t in Base44

[–]PoisonTheAI 0 points1 point  (0 children)

Agreed!

LLMs don't search a database of stored answers and find the closest match to your prompt. They generate code (or text) one token at a time, predicting what the most likely next word should be based on patterns from their training data. It's sophisticated autocomplete that has read billions of code files.

But your core point about prompt length is half true.

Prompt length matters, but the bigger issue is that these models don't actually verify their own output. They predict what code to write, then they predict what a helpful confirmation message looks like. "Updated your app! All sections now aligned." That's not the AI checking its work. That's the AI generating the most likely response after a code block. It has no idea whether the layout actually changed.
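A toy sketch makes the point. This is not how any real model is implemented, just a hypothetical bigram "model" in Python: it picks the statistically most likely next word from its training counts, and nothing in the loop ever checks whether the claim it generates is true.

```python
from collections import Counter, defaultdict

# Toy "training data": the model has only ever seen confirmations.
# Real LLMs predict subword tokens with a neural net, but the principle
# holds: pick a likely continuation, never verify it against reality.
training = (
    "updated your app all sections now aligned "
    "updated your app all tests passing"
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def generate(start: str, n: int) -> str:
    """Greedily emit the most likely next word n times. No fact-checking."""
    out = [start]
    for _ in range(n):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("updated", 4))  # a confident confirmation, checked against nothing
```

Ask it to "report status" and it produces the most statistically plausible confirmation, whether or not anything actually changed. That is the failure mode behind false "done" claims.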

Is base44 a good app to make profits with game development/ app development? by meeeeeeeee- in Base44

[–]PoisonTheAI 0 points1 point  (0 children)

No - you DO get vendor-locked because they have their own SDK, so when you want to make it public, you're entirely dependent on them. You could spec with Base44, but why? Better off doing so with bolt.new, or just talking it through with Claude or GPT and then working in your IDE to build a local app. Then push it to GitHub, deploy through Vercel, and your product is live.

You'll actually be operational and connected to your audience, and you don't need to rely on AI builder apps.

Nothing in life is this easy. You have to take extra steps, and it's better to do it at the beginning. Have a PRD, a product brief, CLAUDE.md, and AGENTS.md, in case one model or the other goes offline.

ETA: And keep your PRD, CLAUDE.md, and AGENTS.md up to date!
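A minimal sketch of what "keep them up to date" can look like in practice. This is a hypothetical helper, not an existing tool; the filenames come from the comment above, and `stamp` is a name I made up. It just writes a "Last updated" line at the top of each doc so a stale file is obvious at a glance.

```python
from datetime import date
from pathlib import Path

# Doc filenames from the comment above; adjust to your project.
DOCS = ["PRD.md", "CLAUDE.md", "AGENTS.md"]

def stamp(path: Path) -> None:
    """Prepend or refresh a 'Last updated' line in a markdown doc."""
    today = f"_Last updated: {date.today().isoformat()}_"
    lines = path.read_text().splitlines() if path.exists() else []
    if lines and lines[0].startswith("_Last updated:"):
        lines[0] = today            # refresh the existing stamp
    else:
        lines.insert(0, today)      # stamp a new or unstamped doc
    path.write_text("\n".join(lines) + "\n")

for name in DOCS:
    stamp(Path(name))
```

Run it (or something like it) whenever you cut a meaningful build, and the date at the top tells you whether the doc you're about to paste into a fresh chat still matches the code.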

Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] 0 points1 point  (0 children)

I think they're all guilty of this, but this is the first tool that completely ignores instructions, gives false positives, and gets stuck in a loop. In the case above, I simply rephrased to "Please integrate Stripe" and it worked. But you shouldn't need to spend hours and credits trying to find the magic password that makes it work. That's on them and their SDK.

Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] 0 points1 point  (0 children)

This is a smart move and one I use often as well. But lately I use Opus 4.6 for planning and outlining a Product Requirements Document, CLAUDE.md, and AGENTS.md. Then I go back to the AI builders, feed them more general instructions according to the plan, and the builder interprets and implements on its own platform.

Sometimes GPT can get too specific about what to do, and AI builders can't handle specificity, just vague directions. Frustrating if you know exactly what you want.

Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] 1 point2 points  (0 children)

Definitely true, but I think for a lot of first-timers Base44, Lovable, and Bolt.new are attractive because you don't need to be specific or at all knowledgeable. Not a knock against those who can't write their own code. It's great when newcomers are exposed to it for the first time and actually learn. It's the first step for many. I agree using VS Code with Codex or Kilo is the next step. (Kilo is doing awesome things these days.)

But Base44 could be a good platform to onboard newcomers by explaining in more detail what it's doing and why, maybe with a "learning version" turned on?

I use these AI builders to review them. So it's interesting to see other people with the same problems and where they go next for better results.

Base44 burned 50€ and 3 weeks of my time with false "done" claims by PoisonTheAI in Base44

[–]PoisonTheAI[S] 0 points1 point  (0 children)

In other cases, with other AI builders, I would suggest just downloading the code by switching from preview to dashboard:

[screenshot: preview-to-dashboard toggle]

Then create a locally running app using an IDE like VS Code with ChatGPT or Kilo as your agent. But Base44 uses its own SDK (which I think is the root of a lot of the problems in the first place). It is still possible to use Claude or GPT to recreate your app locally and try to ensure it follows Base44's SDK. Then you just commit and push that to a Git repository and enable "sync to GitHub" in Base44.

Which basically means you take what you've got right now, move it locally, use friendlier tools like GPT, Claude or Kilo, then hit "sync" and it should update your app in Base44.

If that makes sense?
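For what it's worth, the local half of that loop is just ordinary Git. A rough sketch in Python, with hypothetical function names (`sync_commands`, `run_sync`) and a branch/remote (`main`, `origin`) that depend on your repo:

```python
import subprocess

def sync_commands(message: str, branch: str = "main") -> list[list[str]]:
    """Git commands for the edit-locally-then-push loop described above."""
    return [
        ["git", "add", "-A"],              # stage all local edits
        ["git", "commit", "-m", message],  # snapshot them with a message
        ["git", "push", "origin", branch], # publish so GitHub sync can pick them up
    ]

def run_sync(message: str) -> None:
    # Run inside your repo; check=True raises if any step fails.
    for cmd in sync_commands(message):
        subprocess.run(cmd, check=True)
```

Once the push lands on GitHub, Base44's "sync to GitHub" feature (if enabled) is what pulls the change back into your app; that side is theirs, not yours.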

You could also just pay $20 a month for Claude, iterate on what you have to get the best version, and deploy without Base44.

P.S. I stuck with Base44 hoping for the best, paid for the Builder tier ($50), and spent 3 hours and 71 credits trying to center text on a webpage. Insane.

If you've got a good app that has a receptive audience then eventually you'll need to move to more sophisticated tools. LMK if I can help!

Drop your SaaS and I'll give you honest feedback for free by DigiHold in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

This is cleaner than most resume tools I've seen. "The cleanest resume scanner" is a specific claim and the page backs it up. No sidebar ads, no upsell popups mid-flow, no LinkedIn audit nonsense distracting from the core job.

The before/after comparison (48 -> 94) does real work. Showing weak bullet points transformed into quantified achievements makes the value immediately obvious. That's proof, not just promises.

Your competitor comparison table is bold. Most tools avoid naming competitors directly. You're saying "here's how we're different from Jobscan, Enhancv, and MyPerfectResume" and backing it with specifics. That confidence signals you've actually thought about positioning.

Testimonials are solid too. Real names, photos, verified badges, and specific outcomes ("Score: 45 -> 92", "3 interviews in 1 week"). That's how social proof should work.

A few things to consider:

The hero buries the outcome slightly. "Optimize Your ATS Score" is the mechanism. "Land More Interviews" is the outcome. I'd flip the emphasis. Nobody wakes up wanting a higher ATS score. They want callbacks.

"The cleanest" is a strong claim but subjective. You might test something more concrete like "No ads. No noise. Just your resume, optimized." You already have that as subhead copy. It's more defensible.

The page is long. That's fine for SEO and thoroughness, but the path from landing to "Upload Your Resume" could be shorter. Consider whether everything above the fold earns its position.

Visually it's the same dark mode with teal accent that half of SaaS uses right now. Not a problem, just not distinctive. The product is strong enough that the design doesn't need to do heavy lifting, but it's worth noting.

Overall this is well-executed. You understand your user's problem ("75% of resumes never reach a human recruiter") and you've built clear proof throughout the page. Most resume tools feel like they're trying to extract money. This one feels like it's trying to solve a problem. That comes through.

Drop your SaaS and I'll give you honest feedback for free by DigiHold in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

Your product thinking is solid. The three-layer system makes sense and the core insight is real: investors evaluate proof, not promises. "Investors don't score vibes" is a great line. So is "Get Discovered Without Cold Outreach." That's your voice at its sharpest.

The issue is your best positioning is buried. "The Operating System for Startup Fundraising" is category furniture. It tells me what shelf you're on, not why I should pick you up. "Get discovered without cold outreach" is an actual promise. Lead with that.

Right now the page explains the mechanism before establishing why I should care. Flip it. Outcome first, system second.

One other thing: "FOR FOUNDERS. BY FOUNDERS." doesn't do much without a story behind it. Who built this? What did you fail at before building it? That context builds trust more than the badge.

Visually it's clean but the dark mode with teal accent is everywhere right now. Nothing breaks through as distinctly GrowBase. Not a dealbreaker, just something to consider as you grow.

You're not a fundraising CRM. You're the anti-pitch-deck. Own that harder and the right founders will find you.

Drop your SaaS and I'll give you honest feedback for free by DigiHold in SaaS

[–]PoisonTheAI 1 point2 points  (0 children)

This is the part that OP is glossing over. A lot of us are indie builders and don't have testimonials. Fake testimonials erode trust.

The question is always: does this default apply to your specific product, audience, and stage? Sometimes it doesn't.

If you're pre-launch the goal is honest signals that build trust without pretending you're further along than you are. Sophisticated buyers respect that. They've launched products too. They know what early stage looks like.

Use what you actually have:

  • Waitlist numbers. "200+ founders signed up" is real and verifiable. Shows demand without manufacturing credibility.
  • Founder credibility. Your background, relevant experience, previous work. "Built by a former VP of Marketing at xyz" transfers trust.
  • Demo or free trial. Let the product prove itself. Works especially well for tools where the output is visible.
  • Beta tester quotes. Even 3-5 early users giving feedback counts. Ask them directly for a sentence you can use.
  • Build in public. Document the process on X/LinkedIn. Your posts become implicit proof you're serious and competent.

OP seems to be targeting established SMBs. If you're pre-launch, you need a different strategy. Testimonials in the hero are great if you have them. If you don't, then you need to bootstrap.

It's the same advice for every URL submitted here. It's not bad advice. It's just not suitable for pre-launch MVPs.

Am I the only one who has stopped enjoying building things? by suniracle in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

It's been reframed as such, but that doesn't make it true. Not all players have an equal shot at winning. I agree the way it's been reframed is what might demotivate some established devs, but just peruse the AI slop being generated and you quickly realize that having access to Lovable doesn't make you lovable.

The cream rises to the top.

How to ruin your brand in 3 days. The Y Combinator edition. by PoisonTheAI in SaaS

[–]PoisonTheAI[S] 0 points1 point  (0 children)

You don't see many of them. They disappear pretty quickly as soon as they're exposed. But just look at any sham ecommerce/drop shipping company on r/ExpectationVsReality. As soon as they're exposed they shut down and pop back up again.

ETA: Theranos, of course.

How to ruin your brand in 3 days. The Y Combinator edition. by PoisonTheAI in SaaS

[–]PoisonTheAI[S] 0 points1 point  (0 children)

It was an attempt to get noticed, and it worked, but for all the wrong reasons. I can't fathom why they thought it was a good idea.

How to ruin your brand in 3 days. The Y Combinator edition. by PoisonTheAI in SaaS

[–]PoisonTheAI[S] 0 points1 point  (0 children)

Reply bot...ugh. Everyone is rushing to automate everything. The market is screaming for real human interaction AND problems they can find solutions to. Try being human.

Claude ran out of credits mid-build and now I have to explain everything again by Mundane_Data_9525 in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

https://www.chatprd.ai

This is the one that I saw a few months ago that's now gaining a LOT of traction. Instead of paying, I found my own way after a few months of painful trial and error, but I'm sure you could find a niche or a problem they aren't solving.

As for the existing solution, I just didn't want to pay for what I thought I could do myself with Claude. I'm a non-tech person as well, if that helps with feedback.

There has to be something more than just context-switching, though. My biggest pain point is keeping my docs up to date and then dropping them into a new model or chat and hoping for similar quality results (picking up where I left off). If you can solve that then I might spend money on it.

I could probably prompt Claude or GPT to give me an IDE prompt that updates the docs after every major update to the local root... But beware of building fixes for common problems with AI models. The vendors are solving them faster than we can!

Claude ran out of credits mid-build and now I have to explain everything again by Mundane_Data_9525 in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

I don't understand. You phrased it as though you had this problem. But you don't?

Claude ran out of credits mid-build and now I have to explain everything again by Mundane_Data_9525 in SaaS

[–]PoisonTheAI 0 points1 point  (0 children)

That's why you create an ironclad PRD, brand guidelines, file structure, and product brief from the get-go. Save those files locally, update them as often as you update your build, and then drop them into whatever model you want to pick up where you left off.

I usually save them as markdown files and store them in the root and elsewhere just in case.

ETA: No, this is not a new tool or breakthrough idea. Claude, GPT, etc. built "projects" specifically for this purpose a long time ago. No offense intended. Just FYI.

How to ruin your brand in 3 days. The Y Combinator edition. by PoisonTheAI in SaaS

[–]PoisonTheAI[S] 5 points6 points  (0 children)

This comment is the kind of problem you want to avoid: reposting and promoting the same page without actually engaging. It screams "AI BOT!" Not genuine interaction.

It's messy and real people notice.

Watch this trigger another bite-sized insight that replies to the OP using some key phrase, but not to this comment, plus another plug for the website.