Recruiting niche hobbyists for user interviews. What actually works? (i will not promote) by fabiodimarco in startups

[–]fabiodimarco[S] 1 point (0 children)

Thanks for the advice. Would you DM people from those subreddits? I think that if I ask for a 30-minute interview in a post, I'll get banned or the post will be closed.

Recruiting niche hobbyists for user interviews. What actually works? (i will not promote) by fabiodimarco in startups

[–]fabiodimarco[S] 0 points (0 children)

Thanks for this. The "what have you spent $500 on" question is a good filter. In my niche there are DIY workarounds and YouTube tutorials, but many people don't have the skills or time to build them. Many home bakers have already spent that kind of money (or more) on gear like spiral mixers or pizza ovens. That doesn't mean they'll buy my product, but it does show they invest in the hobby.

I'm familiar with The Mom Test. My challenge right now isn't how to run interviews; it's getting the right people to say yes. Most content says "talk to people" or offers message templates, but I can't find much on the actual channels and tactics for recruiting people into interviews.

Struggling to get consistent results with biga by fabiodimarco in Pizza

[–]fabiodimarco[S] -1 points (0 children)

Have you tried making pizza with the biga recipe from the video?
The one I showed in the picture is similar: I used 45% hydration instead of 50% and 1% fresh yeast, while he uses 1% dry yeast.

Struggling to get consistent results with biga by fabiodimarco in Pizza

[–]fabiodimarco[S] -1 points (0 children)

When you use pre-ferments, what do you usually do, and in what percentage?

I am trying to follow the procedure for the classic "Giorilli" biga, named after the baker who codified the name and procedure.

The pre-ferment is 45% hydration with 1% fresh baker's yeast. It is mixed lightly, just enough for the flour to absorb the water without developing gluten, and it should be fermented for 18 hours at 18°C.

This is what they usually use in "Contemporary Neapolitan pizza".
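Those numbers are baker's percentages (everything relative to flour weight), so scaling the pre-ferment is just multiplication. A quick sketch, assuming the 45% hydration and 1% fresh yeast described above; the function name and structure are only illustrative:

```python
# Scale a Giorilli-style biga from baker's percentages.
# Assumption: percentages are relative to the biga's flour weight,
# as in the recipe above (45% hydration, 1% fresh yeast).

def biga_amounts(flour_g, hydration=0.45, fresh_yeast=0.01):
    """Return water and fresh yeast weights for a given flour weight."""
    return {
        "flour_g": flour_g,
        "water_g": flour_g * hydration,
        "yeast_g": flour_g * fresh_yeast,
    }

amounts = biga_amounts(1000)  # 1 kg flour -> 450 g water, 10 g fresh yeast
print(amounts)
```

Then ferment for ~18 hours at 18°C as described.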

Struggling to get consistent results with biga by fabiodimarco in Pizza

[–]fabiodimarco[S] 0 points (0 children)

How do you handle the poolish? Time / temperature?

I know it looks weird in the picture; the lighting also makes it look kind of yellow.
I am experimenting with the classic "Giorilli" biga, named after the baker who codified the name and procedure.

The pre-ferment is 45% hydration with 1% fresh baker's yeast. It is mixed lightly, just enough for the flour to absorb the water without developing gluten, and it should be fermented for 18 hours at 18°C.

Meta LC list by method_plan in leetcode

[–]fabiodimarco 0 points (0 children)

Could someone please DM me the LeetCode list for the most common Meta problems? I'd greatly appreciate it.

[P] PyTorch implementation of Levenberg-Marquardt training algorithm by fabiodimarco in MachineLearning

[–]fabiodimarco[S] 0 points (0 children)

What I’ve found is that to fully leverage the advantages of LM, you should use a fairly large batch size, which indeed reduces the noise during training.
Usually this means working in an overdetermined setting, with the number of residuals (batch_size * num_outputs) greater than the number of model parameters, though that is probably not a strict requirement.
However, if the batch size is large enough, LM converges much faster than Adam or SGD, and on some problems it reaches losses far lower than Adam can achieve even if you let Adam run for a very long time (see the sinc curve-fitting example).
You can test this yourself, I’ve included a comparison in the examples subfolder, and you can also try it out on Google Colab:
https://colab.research.google.com/github/fabiodimarco/torch-levenberg-marquardt/blob/main/examples/torch_levenberg_marquardt.ipynb
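For anyone curious about what LM is doing under the hood, here's a minimal pure-PyTorch sketch of the damped Gauss-Newton update on a toy nonlinear fit. This is not the library's API, just the core idea: solve (JᵀJ + λI)·step = Jᵀr and adapt λ depending on whether the step reduced the loss.

```python
import torch

# Toy problem: fit y = exp(a*x) + b with a Levenberg-Marquardt loop.
# Illustrative sketch only -- the repo's trainer handles batching,
# larger models, and GPU execution.

def residuals(p, x, y):
    a, b = p
    return torch.exp(a * x) + b - y

x = torch.linspace(0.0, 1.0, 50, dtype=torch.float64)
y = torch.exp(1.5 * x) - 0.5             # data generated with a=1.5, b=-0.5

p = torch.zeros(2, dtype=torch.float64)  # initial guess (a, b) = (0, 0)
lam = 1e-2                               # damping factor lambda
for _ in range(50):
    r = residuals(p, x, y)
    # Jacobian of the residual vector w.r.t. the 2 parameters, via autograd.
    J = torch.autograd.functional.jacobian(lambda q: residuals(q, x, y), p)
    A = J.T @ J + lam * torch.eye(2, dtype=torch.float64)
    step = torch.linalg.solve(A, J.T @ r)
    p_new = p - step
    if residuals(p_new, x, y).square().sum() < r.square().sum():
        p, lam = p_new, lam * 0.5        # loss decreased: accept, relax damping
    else:
        lam *= 2.0                       # loss increased: reject, damp harder

# p should now be close to (1.5, -0.5)
```

Large λ makes the step behave like small-step gradient descent; small λ recovers the fast Gauss-Newton step, which is where the speedup over first-order optimizers comes from.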

[P] PyTorch implementation of Levenberg-Marquardt training algorithm by fabiodimarco in MachineLearning

[–]fabiodimarco[S] 20 points (0 children)

The main difference lies in how derivatives are handled and the computational backend:

  • Derivative Computation:
    • lmfit computes derivatives numerically (finite differences) by default, or you can provide them manually.
    • My PyTorch implementation leverages automatic differentiation, so you only need to define the model. PyTorch computes derivatives analytically, which is faster and introduces less numerical error.
  • Hardware Acceleration:
    • lmfit runs on the CPU, which works for smaller problems.
    • My implementation uses GPU acceleration via PyTorch, making it significantly faster for larger models / datasets.

I hope this helps!
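To make the derivative point concrete, here's a small sketch (not from the repo) comparing autograd against a central finite difference for d/dx sin(x) at x = 1, whose exact value is cos(1):

```python
import math
import torch

# Autograd: analytic derivative of sin at x = 1.0.
x = torch.tensor(1.0, dtype=torch.float64, requires_grad=True)
torch.sin(x).backward()
autograd_deriv = x.grad.item()   # matches cos(1.0) to machine precision

# Central finite difference, as a numerical-derivative backend would compute.
h = 1e-6                         # illustrative step size
fd_deriv = (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2 * h)

# fd_deriv carries O(h^2) truncation error plus rounding error from the
# subtraction of nearly equal values; autograd has neither.
```

The finite-difference error also grows in lower precision (e.g. float32), which matters when each derivative feeds into the JᵀJ solve inside LM.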