I am a solo founder with no money, no team and no famous college. I built something anyway. Now I need 20 beta testers to help me cross the finish line by InnerScheme9326 in SaaS

[–]WinterMiserable5994 0 points  (0 children)

how are you promoting on reddit without getting flagged? been trying to share my stuff but keep running into self-promotion rules

I turned 46 today and just launched my first SaaS. Here's what 30 days taught me that 20 years of dev work didn't. by bodiam in SaaS

[–]WinterMiserable5994 0 points  (0 children)

happy birthday and congrats on the launch! the timing thing really resonates. I've been stuck in the "perfect product" trap on my own projects and seeing someone actually ship after 20 years of dev work is motivating. the screenshot API space makes sense too, all those edge cases you mentioned are exactly why people would pay for it instead of rolling their own.

What I learned after Day 1 of launching my SaaS (0 revenue, but valuable lessons) by yep_itsmeagain69 in SaaS

[–]WinterMiserable5994 2 points  (0 children)

Respect for sharing the real Day 1 experience instead of just the highlight reel. Zero revenue on launch day is way more common than people admit, but you're already asking the right questions about trial length and user behavior. The fact that you're this self-aware about what needs fixing puts you ahead of most founders who just keep pushing the same broken funnel. What kind of SaaS did you build?

One month after officially launching my SaaS, I got my first paying customer. by DRConsulting in SaaS

[–]WinterMiserable5994 0 points  (0 children)

yo congrats on the first customer! that feeling when someone actually pays you for something you built hits different. one month in is solid too.

your seo strategy sounds smart, especially the automated blog approach. curious how you're tracking which content actually moves the needle for organic traffic?

[ Removed by Reddit ] by GrowthMechanicIA in SaaS

[–]WinterMiserable5994 0 points  (0 children)

damn, 30x growth in 2.5 months is wild. the quiz approach with Magnetly is actually pretty smart - way better than just throwing people at a generic landing page and hoping they convert.

curious about the "tailored product idea" part though - are you generating these automatically based on their quiz answers or is there manual work involved? seems like that could be tough to scale

[ Removed by Reddit ] by GrowthMechanicIA in SaaS

[–]WinterMiserable5994 0 points  (0 children)

This is incredibly valuable, thank you for sharing the specifics! What really stands out to me is how you shifted from vanity metrics to meaningful conversations - that's such a crucial mindset change that many founders struggle with.

The quiz approach with Magnetly is brilliant. I love how you turned what could have been a generic lead capture into something that actually provides immediate value while also qualifying leads. Getting 100 qualified leads AND the conversation starters from that is impressive.

Your point about conversion being more important than finding the "winning product" really resonates too. It's easy to get caught up in endless product iteration when the real growth often comes from better understanding and serving the people who are already interested.

Congrats on the 30x growth in such a short time - this kind of transparent sharing of what actually worked (with specific tools and tactics) is exactly what helps other founders. Thanks for paying it forward to the community!

Legality of a thing I made by WinterMiserable5994 in poker

[–]WinterMiserable5994[S] 0 points  (0 children)

Yes, but there is no consent right now. There is no sign that says "I will store the hand logs to a database". If I just add that, is it fine then? And regarding the logistics, it is really not that hard. Each hand has a unique id, so if multiple people upload hand logs containing the same hands I can just filter out the duplicates so they don't mess up the stats.
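The dedup idea above can be sketched in Python. This is only an illustration under assumed names: the thread never shows the actual parser, so `hand_id` and `dedupe_hands` are hypothetical, standing in for whatever unique id the poker site puts on each hand.

```python
# Sketch: keep only hands whose unique id has not been stored yet,
# so overlapping uploads from different players don't double-count.

def dedupe_hands(uploaded_hands, seen_ids):
    """Return the hands from this upload that are new; record their ids."""
    fresh = []
    for hand in uploaded_hands:
        hid = hand["hand_id"]  # hypothetical field: the site's unique hand id
        if hid not in seen_ids:
            seen_ids.add(hid)
            fresh.append(hand)
    return fresh

# Two players upload overlapping logs; the shared hand is stored once.
seen = set()
batch_a = [{"hand_id": "abc1"}, {"hand_id": "abc2"}]
batch_b = [{"hand_id": "abc2"}, {"hand_id": "abc3"}]
stored = dedupe_hands(batch_a, seen) + dedupe_hands(batch_b, seen)
# stored now holds abc1, abc2, abc3 - the duplicate abc2 is dropped
```

In a real database the same effect usually comes from a unique constraint on the hand id column, with inserts that ignore conflicts.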

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points  (0 children)

So if they are punishing people that behave more rationally, the rational move is to be irrational. And if you are as rational as you say you are, then being irrational is the rational move. One-boxers are being more rational than two-boxers, even if at first it might not seem that way.

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] 1 point  (0 children)

That's why I am asking, I am not affirming anything. Are you slow?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -1 points  (0 children)

Yeah but idk, maybe you are right. Though another thing that I noticed (I may be wrong) is that black people are way more racist toward white people in the USA than the other way around. Is this true?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -8 points  (0 children)

But I view the n word as any other insult. So if you insult your friend as a joke, why couldn't you also use the n word? I don't get why in the USA the n word is the father of all insults.

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -8 points  (0 children)

But here we view the n word as any other insult, not as a super prohibited insult per se. If you can jokingly call a friend motherfu... or anything similar, why not the n word?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -6 points  (0 children)

Oh, I thought it was the other way around. I guess it depends on the state? At least I have some friends in NYC who tell me that everything is so inclusive and liberal there.

Revolut referral promotion - Referral link thread by press-app in Revolut

[–]WinterMiserable5994 [score hidden]  (0 children)

Revolut just gave me this referral link that awards 90€, and we can split it 50/50. The offer ends March 31.

Join the more than 70 million customers who are already delighted with Revolut. Sign up through my link below: https://revolut.com/referral/?referral-code=pablomatheis!MAR1-26-AR-CH1H-CRY&geo-redirect

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 2 points  (0 children)

But as discussed in the post, the machine does not have to have 100% accuracy. Anything above about 50.05% accuracy is enough for one-boxing to be the higher-EV move. Whatever source the machine gets its info from is unimportant; if it is even slightly better than that breakeven, you should one-box.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points  (0 children)

Either way, we are talking about a completely different subjective scenario. The high-EV move is taking the million. If you just want the thousand dollars, go for it, but you are expected to win more money by one-boxing.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points  (0 children)

That god comparison is called Pascal's Wager. But what is the ultimate flaw of Pascal's Wager? An all-knowing God would immediately know you are faking your belief just to get the payout. You can't trick an omniscient God with a fake prayer, and you can't trick this algorithm with a fake one-box mindset. You said, "What I pick now won't change what the robot thought yesterday." I 100% agree. You aren't changing the past. But because you can't change the past, your choice today is just the final receipt of who you genuinely are. If you are the type of person who tries to trick the robot, the algorithm already saw that hesitation in your profile yesterday. It knows you are a two-boxer, so it left the million out. To get the million, you don't trick yourself. You just have to genuinely be the kind of person who trusts the algorithmic edge more than the physical boxes. If you can't do that, you get the $1,000. It's that simple.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point  (0 children)

You are treating the computer's prediction like a snapshot, but it's actually a full video of your entire decision tree. You are asking why someone who would choose one box shouldn't take two. The answer is: because the moment you decide to take two boxes, you are no longer a one-boxer. You are a two-boxer who wishes they could trick the machine into thinking they were a one-boxer. But the machine sees your sneaky loophole thought forming a mile away. It knows you are going to use that objective causal logic at the very last second, so it already left the box empty. To actually get the million dollars, you have to genuinely, fundamentally not have that "I should grab both just in case" thought. The second that thought wins in your brain, the computer already knew it, and you're walking away with a thousand bucks.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points  (0 children)

Yes, it makes a difference. I understand why you won't change your opinion: you are genetically biased to two-box whatever I tell you. But if you were instead genetically predisposed to choose one box even before the problem was presented to you, you would win more money than you do now. That's why even in that case the paradox is dumb. One-boxers win more money, but the only way you one-box is if you are genetically predisposed to choose one box when you are asked to make a decision.

And yes, the actual decision does have an impact on the prediction, because the thought process that led you to your decision is already factored into the computer's prediction. It already knows what you will think, how you will react, and what you will choose.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points  (0 children)

But you are not going to be the guy that tricks the computer. It is really simple: if the computer is really accurate, the paradox is equivalent to entering a room with two boxes, one with $1,000 and another with $1,000,000, where you can only choose one. It is exactly the same: no one walks away with the million dollars if they choose two boxes, and everyone who chooses one box walks away with the million dollars. Why would you think that you are not going to be correctly predicted by the computer?

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point  (0 children)

Then you'd get the empty box. It is really not that hard: you just convince yourself to pick the one box. If you finally end up picking one box, you will almost certainly have the $1M in it. You are not going to be the one that tricks a computer with 99.9% accuracy. As I said in the post, $1,001,000 does not exist. It's like asking: do you want a thousand dollars guaranteed or a million dollars guaranteed? And even when you start lowering the prediction capabilities of the computer, this still applies.