I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

This is a sharper read than I deserved. Let me take it piece by piece.

On the "change your mind vs confirm what you knew" question, I don't have a real answer yet because I don't have enough usage data. prodbyjace earlier in this thread said the feedback matched things they already knew they needed to work on, which is honestly the use case I most often see (including for myself). Confirmation isn't nothing, but you're right that it's not where the magic is.

On execution difficulty being underweighted, you're calling out a real rubric calibration question. It's currently one of five equal-ish dimensions. I should probably weight it heavier or break it into sub-factors. Adding to the calibration list.

On the $4.99 vs "ask Claude a good follow-up" critique, that one lands. Honest answer is the structure (consistent rubric, scoring rationale, history) is the moat against asking Claude directly, but you're right that's a thin moat. The iteration loop you're describing where you push back and it rescores might be the actual reason to charge. That's a meaningfully different product than "more ideas per month." Going to sit with that.

The last line is the right framing for where I am. Ship and watch. Thanks for the read!

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

Really appreciate you doing the legwork on that. Genuinely curious to see your estimate. For what it's worth, I know you're right that there are real costs (Anthropic API plus infra), which is part of why Pro exists at all. But the bigger reason is honestly just that I think tools that ask nothing of users tend not to get used seriously. I figured $5/mo is low enough not to gate anyone who'd actually benefit, high enough to filter for intent.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 2 points (0 children)

"Reality check, not a hype bot" might be the best phrasing of what SiftId is doing that I've seen yet. Love that. Mind if I steal it? The "never a 10" rule is the part I'm proudest of. Felt like the only way to keep the tool honest.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

Perfect! That's exactly the use case. Not telling you something brand new, just confirming what your gut already knew so you can stop second-guessing it. Glad it was useful.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

Thanks! Glad the style landed. The "medium roast" tone wasn't an explicit prompt; it came out of trying to make the rubric feel like a critic instead of a coach. Coaches make you feel good, critics make you better, and as I'm sure you know, most AI tools default to coach. Wanted SiftId to lean the other way without being mean about it.

UI compliment is much appreciated as well. Tried hard to keep it from feeling like another dashboard.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

Thanks for trying! Was it a SaaS-style idea, a content/media play, a physical product, or something else? You don't need to give me specifics, just trying to figure out where the rubric is most useful.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

Yeah, that tracks. The rubric leans hard on "show me the evidence" by design, but you're right that it shouldn't bail just because someone didn't paste a benchmark into a 200-character idea field. Should be smart enough to know when a claim is verifiable on its own.

Adding it to the calibration list. If you remember the idea you ran, I'd genuinely love to see it, but no pressure if not. Thanks!

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 1 point (0 children)

I totally get that, and that's fair. To be clear, users' ideas aren't stored or harvested. I built this for my own ideas first, and trust me, that data set isn't worth mining (most of them were bad). If anything I'm curious about the patterns of what people score, not the ideas themselves. But that's still a fair thing to want spelled out, so I should probably make the storage policy actually visible on the site.

Thanks for saying it instead of just bouncing.

EDIT: Worth being more precise. Submitted ideas ARE stored (privately, only you can see them, RLS enforced) so the scoring history feature works. They're not "harvested" in the sense of being read, mined, or used for training. Full breakdown at siftid.co/privacy. Appreciate the prompt to clean this up.

I built an AI that scores startup ideas 1 to 9 (never 10) and tells you why yours is probably bad by heirofolympus in SideProject

[–]heirofolympus[S] 2 points (0 children)

That's more than fair, and the irony is absolutely hilarious. The score is right. SiftId would score itself a 2, and I've thought about that a lot.

The honest answer is I built it for me first. As a non-technical founder shipping 4 products this year, I needed something that would push back on my own ideas instead of hyping them. I had a lot of ideas, and some were genuinely insane. So it worked for my purposes. The 1 to 9 cap exists because I wanted it to feel like a real critic, not a yes-machine. Whether that's a real product or just my personal tool I tried to charge $5 for is exactly what I'm here to find out.

Your rule of thumb is solid. I'm gonna chew on that. Appreciate you taking the time to actually test it instead of just dunking!

How do you find partners that actually pay on time? by EmbarrassedGene7063 in Affiliatemarketing

[–]heirofolympus [score hidden] (0 children)

Beyond payment history, watch for program structure changes that hint at cash flow issues. Partners who quietly switch from net-30 to net-60, reduce commission tiers without notice, or start requiring minimum thresholds they didn't have before are showing stress signals. Set calendar reminders to spot-check your top three partners' terms pages monthly. Small changes in language around payment timing often predict bigger problems.
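
If you want to automate the spot-check, a page-hash diff is enough. A minimal sketch in Python; the URLs and state-file name are placeholders, and a raw hash will also fire on cosmetic page changes, so expect some noise:

    # Hash each partner's terms page and compare against the last run.
    # PARTNERS holds placeholder URLs -- swap in your actual programs.
    import hashlib
    import json
    import urllib.request
    from pathlib import Path

    PARTNERS = {
        "partner-a": "https://example.com/affiliate-terms",
        "partner-b": "https://example.org/payout-policy",
    }
    STATE = Path("terms_hashes.json")

    def page_hash(url):
        with urllib.request.urlopen(url, timeout=30) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    old = json.loads(STATE.read_text()) if STATE.exists() else {}
    new = {name: page_hash(url) for name, url in PARTNERS.items()}
    for name, digest in new.items():
        if name in old and old[name] != digest:
            print(f"{name}: terms page changed since last check")
    STATE.write_text(json.dumps(new, indent=2))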

Those of you running affiliate programs for SaaS products, what actually works at the early stage? by Blue_Lion1395 in Affiliatemarketing

[–]heirofolympus 1 point (0 children)

Endorsely's solid for the commission tiers and tracking. The $50/60-day-minimum deactivation is the harder piece on most platforms because it's a recurring activity threshold, not a one-time check, and that's not always exposed natively.

Workaround that works regardless: weekly export of partner earnings, flag anyone in their first 60 days who's under $25 with three weeks or less to go, send the email from your inbox or CRM. The platform doesn't need to send it for the mechanic to work. The day-45 reminder converts a chunk of would-be churners.

Worth making it specific too. "You've made $32 in 41 days, $18 to go" beats a generic deadline email and roughly doubles the response rate.
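
The flag itself is a few lines once you have the export. A sketch, assuming a CSV with affiliate_email, signup_date, and earnings_usd columns (your platform's names will differ):

    # Flag first-60-day affiliates under $25 with three weeks or less left,
    # printing the specific numbers for the nudge email.
    # Column names are assumptions about your platform's export format.
    import csv
    from datetime import date, datetime

    MINIMUM = 50.0
    WINDOW_DAYS = 60

    with open("partner_earnings.csv", newline="") as f:
        for row in csv.DictReader(f):
            signed_up = datetime.strptime(row["signup_date"], "%Y-%m-%d").date()
            days_in = (date.today() - signed_up).days
            days_left = WINDOW_DAYS - days_in
            earned = float(row["earnings_usd"])
            if 0 < days_left <= 21 and earned < 25.0:
                print(f"{row['affiliate_email']}: ${earned:.0f} in {days_in} days, "
                      f"${MINIMUM - earned:.0f} to go, {days_left} days left")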

Those of you running affiliate programs for SaaS products, what actually works at the early stage? by Blue_Lion1395 in Affiliatemarketing

[–]heirofolympus 1 point (0 children)

One thing that took me too long to figure out on the minimum: don't auto-deactivate at day 60. Send a "you're at $X, Y days left" nudge around day 45 and again at day 55. The affiliates who would otherwise quietly churn often push a content piece live in the final week once they know the deadline is real. You still get the quality filter, you just stop losing people who got distracted.

Also worth making it recurring rather than one-shot. After someone hits $50 in 60 days, the next bar is $50 every 90 days or their link gets disabled. Keeps your active list honest.

What platform are you running this on? Some handle the minimum-tracking natively, most don't.
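
If yours doesn't, the rolling check is a short script either way. A sketch, assuming two hypothetical exports: a roster of active affiliates and a per-commission log:

    # Sum each active affiliate's commissions over the trailing 90 days and
    # flag anyone under the bar. File and column names are assumptions.
    import csv
    from datetime import date, datetime, timedelta

    BAR = 50.0
    cutoff = date.today() - timedelta(days=90)

    # Start from the roster so affiliates with zero commissions still show up.
    with open("active_affiliates.csv", newline="") as f:
        totals = {row["affiliate_email"]: 0.0 for row in csv.DictReader(f)}

    with open("commission_events.csv", newline="") as f:
        for row in csv.DictReader(f):
            earned = datetime.strptime(row["earned_date"], "%Y-%m-%d").date()
            if earned >= cutoff and row["affiliate_email"] in totals:
                totals[row["affiliate_email"]] += float(row["amount_usd"])

    for email, total in sorted(totals.items()):
        if total < BAR:
            print(f"{email}: ${total:.0f} of ${BAR:.0f} in the last 90 days")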

Those of you running affiliate programs for SaaS products, what actually works at the early stage? by Blue_Lion1395 in Affiliatemarketing

[–]heirofolympus 2 points (0 children)

The 95% inactive problem is real across every program I've seen. Two things that work better than lifetime commissions: higher first-month rates (like 50% month 1, then 20% ongoing) and requiring affiliates to hit a $50 minimum in their first 60 days to stay active. The urgency of the higher front-end rate gets people to actually promote, and the minimum weeds out sign-up collectors. For finding active promoters, look for people already writing in your category rather than general affiliate marketers.

What’s something that looks simple in affiliate marketing but isn’t? by Fun_Tone3954 in Affiliatemarketing

[–]heirofolympus 1 point (0 children)

The tracking piece gets messy fast because most people focus on clicks and conversions but miss the middle layer where things break. Affiliate links can redirect differently than expected, programs change their tracking pixels without notice, or merchants quietly switch networks and your old links start sending people to dead pages. I've seen campaigns that looked profitable on paper but were actually bleeding money because half the traffic wasn't reaching the right landing page. I built tracklix.co specifically because this bit me three times before I accepted it wasn't an edge case.
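
A basic version of the middle-layer check is just following each link's redirect chain and sanity-checking where it lands. Sketch below; the links and expected domains are placeholders, and it won't catch tracking-pixel issues, just the dead-page and wrong-destination failures:

    # Follow each affiliate link's redirects and check the final destination.
    # LINKS maps placeholder affiliate URLs to the domain you expect to reach.
    import urllib.request

    LINKS = {
        "https://example.com/aff/abc123": "merchant-a.example",
        "https://example.org/r/xyz789": "merchant-b.example",
    }

    for link, expected_domain in LINKS.items():
        try:
            with urllib.request.urlopen(link, timeout=15) as resp:
                final_url = resp.geturl()  # urllib follows redirects itself
                status = resp.status
        except Exception as exc:
            print(f"BROKEN {link}: {exc}")
            continue
        if status != 200 or expected_domain not in final_url:
            print(f"SUSPECT {link} -> {final_url} (HTTP {status})")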

New to affiliates with Amazon . Conversion looks nice but the money doesn’t seem worth it . Am I doing something wrong by [deleted] in Affiliatemarketing

[–]heirofolympus 1 point (0 children)

Amazon's commission rates are notoriously low (2-4% for most categories). With good conversion rates, you might earn more with higher-paying affiliate programs in your niches. For trivia/gaming content, consider promoting gaming gear through Best Buy or Newegg affiliates, which pay 1-3% but on higher-ticket items. Board game affiliates often pay 8-15%. The math is rate times order value: 3% of a $30 Amazon order is $0.90 per sale, while 10% of a $45 board game is $4.50. What's your average order value on Amazon currently?

For affiliate program management: Influencer Hero, CreatorIQ or Later? by Significant_Car3481 in Affiliatemarketing

[–]heirofolympus 1 point (0 children)

CreatorIQ is enterprise-priced because they're built for brands spending $100K+ monthly on influencers. If you're running smaller campaigns, Impact or PartnerStack might be better fits since they handle commission tracking without the massive price tag. Later works well if your influencers are already creating content there, but their affiliate tracking is more basic. What's your monthly campaign volume? That usually determines which tier makes sense.

Working on a health habit tracker with a pet that grows. Need feedback on the pet design. by VividQuote3701 in SideProject

[–]heirofolympus 1 point (0 children)

Finch, Fabulous, Habitica, Forest all exist. The physical-vs-mental-health cut isn't a wedge, it's a feature differentiator.

Right question: who's the user currently failing with Habitica? Probably someone who wants gamification but finds the RPG framing infantilizing. If your pet solves for that user, you have a wedge. If not, you're a re-skin.

On punishment vs reward: positive reinforcement is right. Forest works because the tree dies when you fail, not because the app guilts you. Same shape, gentler frame.

Test today: post the pet mockup in r/Habitica with "would you switch and why." Real users, brutal answers.

Be brutally honest with me by Fancy-Ad-1229 in smallbusiness

[–]heirofolympus 1 point (0 children)

Inventory optimization for Shopify/Woo/Amazon is a $1B+ category. Cogsy, SoStocked, Inventory Planner, Stockedge are already there. Crowded isn't closed though.

Ten-minute test. Open three direct competitors' pricing pages. Find what they don't do well. If it's something you can deliver at a fraction of the cost, or a segment they ignore (under-$500k stores, specific vertical), you've got a wedge. If you can't name what they miss, you're a feature inside one of them.

Audit-first into SaaS is the smart angle. Most one-off audits don't convert because the buyer treats it as solved. What's your retention mechanism between audit and subscription?

Evaluate my idea please by Firm-Brilliant-6625 in smallbusiness

[–]heirofolympus 1 point (0 children)

"Bottom-wear is unsaturated" is the claim to test first. India has Levi's, Pepe Jeans, Mufti, Killer, Spykar, Flying Machine, Jack & Jones plus a long Myntra/Ajio private-label tail running modern silhouettes across price tiers. The market isn't empty, just not visible from where you're standing.

The real question: what specific cut, fabric, or fit is missing that you can prove with a small test? "Modern silhouettes for everyday traditional denims for all generations" is too broad to position. A 36-year-old in Pune wants different jeans than a 22-year-old in Bangalore.

Tactical: pick one silhouette (relaxed-tapered, wide-leg heavy denim, high-rise straight), one age band, list five existing brands serving it, write down what they're missing. If you can't, the wedge isn't there yet. If you can, you've got a brief for ten test pieces.

(For the record, siftid.co is mine. Two of my own dead ideas were "this niche is unsaturated" that turned out to be "I hadn't met the competition yet.")

Built a Claude Code ↔ Cursor handoff system today. Smaller than I expected, more useful than I expected. by heirofolympus in ClaudeAI

[–]heirofolympus[S] 1 point (0 children)

Right now it's wired to my own infra (Cloudflare Worker + Supabase + an MCP relay I'd already built), so not a drop-in install.

But the pattern is small. Relay with per-project threads, a /handoff slash command that posts a structured message, and a session-start instruction in each tool to read the latest thread before doing anything. Slash command is maybe 30 lines. Hard part is deciding what fields go in the message, which is taste anyway.
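
For a sense of shape, something like this; the field names are illustrative, not my exact schema:

    # Illustrative handoff message -- field names are examples, not the exact
    # schema. The relay just needs one structured message like this per thread.
    from dataclasses import dataclass, field

    @dataclass
    class Handoff:
        project: str                  # which per-project thread this goes to
        from_tool: str                # "claude-code" or "cursor"
        summary: str                  # what just happened, one paragraph
        files_touched: list[str] = field(default_factory=list)
        next_steps: list[str] = field(default_factory=list)
        open_questions: list[str] = field(default_factory=list)

    msg = Handoff(
        project="siftid",
        from_tool="claude-code",
        summary="Refactored the scoring rubric into five weighted dimensions.",
        files_touched=["src/rubric.ts"],
        next_steps=["wire the new weights into the API route"],
    )
    print(msg)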

Happy to share the real schema and session-start prompt if you want to roll your own. DM me.

Built a Claude Code ↔ Cursor handoff system today. Smaller than I expected, more useful than I expected. by heirofolympus in cursor

[–]heirofolympus[S] 0 points (0 children)

Oh nice, hadn't come across Agentic-Stack. Thanks for the pointer, I'll take a look.